Test Report: Hyper-V_Windows 19312

c58167e77f3b0efe0c3c561ff8e0552b34c41906:2024-07-22:35447

Failed tests (21/201)

TestOffline (294.39s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-749900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p offline-docker-749900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: exit status 90 (3m39.244141s)
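The test drives the minikube binary as an external process and fails the run on any non-zero exit. A minimal reproduction sketch in Go (hypothetical, not the actual aab_offline_test.go helpers) that issues the same command and surfaces the exit status the test saw:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same invocation as the failing test run above.
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"start", "-p", "offline-docker-749900",
			"--alsologtostderr", "-v=1", "--memory=2048",
			"--wait=true", "--driver=hyperv")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			var exitErr *exec.ExitError
			if errors.As(err, &exitErr) {
				// The run above ended here with exit status 90 after ~3m39s.
				fmt.Printf("minikube exited with status %d\n", exitErr.ExitCode())
				return
			}
			fmt.Println("could not start minikube:", err)
		}
	}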

-- stdout --
	* [offline-docker-749900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "offline-docker-749900" primary control-plane node in "offline-docker-749900" cluster
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Found network options:
	  - HTTP_PROXY=172.16.1.1:1
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	  - HTTP_PROXY=172.16.1.1:1
	
	

-- /stdout --
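The HTTP_PROXY=172.16.1.1:1 value above is the deliberately unreachable proxy this offline test runs under, and the stderr below warns that NO_PROXY does not include the minikube IP (172.28.207.6). A sketch of how a caller would pass both variables to a start, assuming the environment-driven proxy handling described at the docs page linked above (the NO_PROXY ranges here are illustrative, not from the log):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"start", "-p", "offline-docker-749900", "--driver=hyperv")
		cmd.Env = append(os.Environ(),
			// Unroutable proxy, as used by the test above.
			"HTTP_PROXY=172.16.1.1:1",
			// Illustrative ranges that would cover the Hyper-V Default Switch
			// subnet (172.28.207.6 falls inside 172.16.0.0/12).
			"NO_PROXY=172.16.0.0/12,192.168.0.0/16",
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}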
** stderr ** 
	W0722 02:14:03.355045   10564 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0722 02:14:03.452528   10564 out.go:291] Setting OutFile to fd 580 ...
	I0722 02:14:03.452673   10564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 02:14:03.452673   10564 out.go:304] Setting ErrFile to fd 872...
	I0722 02:14:03.452673   10564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 02:14:03.479166   10564 out.go:298] Setting JSON to false
	I0722 02:14:03.482164   10564 start.go:129] hostinfo: {"hostname":"minikube6","uptime":129651,"bootTime":1721484792,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0722 02:14:03.482164   10564 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 02:14:03.491161   10564 out.go:177] * [offline-docker-749900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0722 02:14:03.500165   10564 notify.go:220] Checking for updates...
	I0722 02:14:03.507162   10564 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 02:14:03.515175   10564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 02:14:03.523717   10564 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0722 02:14:03.535025   10564 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 02:14:03.541865   10564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 02:14:03.546316   10564 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 02:14:03.547285   10564 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 02:14:10.230279   10564 out.go:177] * Using the hyperv driver based on user configuration
	I0722 02:14:10.234684   10564 start.go:297] selected driver: hyperv
	I0722 02:14:10.234684   10564 start.go:901] validating driver "hyperv" against <nil>
	I0722 02:14:10.234684   10564 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 02:14:10.294643   10564 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 02:14:10.295571   10564 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 02:14:10.295571   10564 cni.go:84] Creating CNI manager for ""
	I0722 02:14:10.295571   10564 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 02:14:10.295571   10564 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 02:14:10.295571   10564 start.go:340] cluster config:
	{Name:offline-docker-749900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-749900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 02:14:10.296568   10564 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 02:14:10.300571   10564 out.go:177] * Starting "offline-docker-749900" primary control-plane node in "offline-docker-749900" cluster
	I0722 02:14:10.303575   10564 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 02:14:10.304209   10564 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0722 02:14:10.304209   10564 cache.go:56] Caching tarball of preloaded images
	I0722 02:14:10.304616   10564 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 02:14:10.304616   10564 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 02:14:10.305136   10564 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\offline-docker-749900\config.json ...
	I0722 02:14:10.305136   10564 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\offline-docker-749900\config.json: {Name:mk1dc729a95c3ba851f30a175be8f243a8bbba96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 02:14:10.306560   10564 start.go:360] acquireMachinesLock for offline-docker-749900: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 02:14:10.306560   10564 start.go:364] duration metric: took 0s to acquireMachinesLock for "offline-docker-749900"
	I0722 02:14:10.306560   10564 start.go:93] Provisioning new machine with config: &{Name:offline-docker-749900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-749900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 02:14:10.306560   10564 start.go:125] createHost starting for "" (driver="hyperv")
	I0722 02:14:10.311573   10564 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0722 02:14:10.312034   10564 start.go:159] libmachine.API.Create for "offline-docker-749900" (driver="hyperv")
	I0722 02:14:10.312034   10564 client.go:168] LocalClient.Create starting
	I0722 02:14:10.312638   10564 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0722 02:14:10.313031   10564 main.go:141] libmachine: Decoding PEM data...
	I0722 02:14:10.313031   10564 main.go:141] libmachine: Parsing certificate...
	I0722 02:14:10.313245   10564 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0722 02:14:10.313629   10564 main.go:141] libmachine: Decoding PEM data...
	I0722 02:14:10.313629   10564 main.go:141] libmachine: Parsing certificate...
	I0722 02:14:10.313793   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0722 02:14:12.561113   10564 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0722 02:14:12.561397   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:12.561494   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0722 02:14:14.811572   10564 main.go:141] libmachine: [stdout =====>] : False
	
	I0722 02:14:14.811921   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:14.811997   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 02:14:16.737819   10564 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 02:14:16.737819   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:16.737819   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 02:14:21.268388   10564 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 02:14:21.268388   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:21.274062   10564 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 02:14:21.813095   10564 main.go:141] libmachine: Creating SSH key...
	I0722 02:14:21.893911   10564 main.go:141] libmachine: Creating VM...
	I0722 02:14:21.893911   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 02:14:24.893104   10564 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 02:14:24.893104   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:24.893104   10564 main.go:141] libmachine: Using switch "Default Switch"
	I0722 02:14:24.893104   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 02:14:26.714182   10564 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 02:14:26.714182   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:26.714182   10564 main.go:141] libmachine: Creating VHD
	I0722 02:14:26.714294   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0722 02:14:30.578300   10564 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : EBBB6C06-BDE0-46CD-A989-60E91C96CC52
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0722 02:14:30.578300   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:30.579374   10564 main.go:141] libmachine: Writing magic tar header
	I0722 02:14:30.579374   10564 main.go:141] libmachine: Writing SSH key tar header
	I0722 02:14:30.590023   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0722 02:14:33.788069   10564 main.go:141] libmachine: [stdout =====>] : 
	I0722 02:14:33.788222   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:33.788222   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900\disk.vhd' -SizeBytes 20000MB
	I0722 02:14:36.365710   10564 main.go:141] libmachine: [stdout =====>] : 
	I0722 02:14:36.365710   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:36.366039   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM offline-docker-749900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0722 02:14:40.090625   10564 main.go:141] libmachine: [stdout =====>] : 
	Name                  State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                  ----- ----------- ----------------- ------   ------             -------
	offline-docker-749900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0722 02:14:40.090625   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:40.090846   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName offline-docker-749900 -DynamicMemoryEnabled $false
	I0722 02:14:42.440860   10564 main.go:141] libmachine: [stdout =====>] : 
	I0722 02:14:42.441166   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:42.441229   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor offline-docker-749900 -Count 2
	I0722 02:14:44.688885   10564 main.go:141] libmachine: [stdout =====>] : 
	I0722 02:14:44.688885   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:44.689064   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName offline-docker-749900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900\boot2docker.iso'
	I0722 02:14:47.314562   10564 main.go:141] libmachine: [stdout =====>] : 
	I0722 02:14:47.315538   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:47.315538   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName offline-docker-749900 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900\disk.vhd'
	I0722 02:14:50.001932   10564 main.go:141] libmachine: [stdout =====>] : 
	I0722 02:14:50.002290   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:50.002290   10564 main.go:141] libmachine: Starting VM...
	I0722 02:14:50.002428   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM offline-docker-749900
	I0722 02:14:53.190176   10564 main.go:141] libmachine: [stdout =====>] : 
	I0722 02:14:53.191030   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:53.191030   10564 main.go:141] libmachine: Waiting for host to start...
	I0722 02:14:53.191030   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:14:55.541300   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:14:55.541300   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:55.541681   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:14:58.155448   10564 main.go:141] libmachine: [stdout =====>] : 
	I0722 02:14:58.155448   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:14:59.169128   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:01.439092   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:01.439092   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:01.439874   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:15:04.105562   10564 main.go:141] libmachine: [stdout =====>] : 
	I0722 02:15:04.105562   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:05.108492   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:07.368643   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:07.368831   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:07.368831   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:15:09.954887   10564 main.go:141] libmachine: [stdout =====>] : 
	I0722 02:15:09.954887   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:10.955175   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:13.255435   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:13.255435   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:13.255871   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:15:15.824343   10564 main.go:141] libmachine: [stdout =====>] : 
	I0722 02:15:15.825427   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:16.832614   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:19.145429   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:19.146238   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:19.146300   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:15:21.760318   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:15:21.760318   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:21.760827   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:23.965112   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:23.965112   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:23.965386   10564 machine.go:94] provisionDockerMachine start ...
	I0722 02:15:23.965549   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:26.183842   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:26.183842   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:26.184454   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:15:28.795498   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:15:28.795498   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:28.800969   10564 main.go:141] libmachine: Using SSH client type: native
	I0722 02:15:28.814700   10564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.207.6 22 <nil> <nil>}
	I0722 02:15:28.814700   10564 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 02:15:28.939047   10564 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 02:15:28.939047   10564 buildroot.go:166] provisioning hostname "offline-docker-749900"
	I0722 02:15:28.939047   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:31.103089   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:31.103371   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:31.103444   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:15:33.684944   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:15:33.684944   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:33.691033   10564 main.go:141] libmachine: Using SSH client type: native
	I0722 02:15:33.691834   10564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.207.6 22 <nil> <nil>}
	I0722 02:15:33.691834   10564 main.go:141] libmachine: About to run SSH command:
	sudo hostname offline-docker-749900 && echo "offline-docker-749900" | sudo tee /etc/hostname
	I0722 02:15:33.841741   10564 main.go:141] libmachine: SSH cmd err, output: <nil>: offline-docker-749900
	
	I0722 02:15:33.841741   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:36.042580   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:36.042580   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:36.042980   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:15:38.621299   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:15:38.621385   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:38.625898   10564 main.go:141] libmachine: Using SSH client type: native
	I0722 02:15:38.626415   10564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.207.6 22 <nil> <nil>}
	I0722 02:15:38.626415   10564 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\soffline-docker-749900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-docker-749900/g' /etc/hosts;
				else 
					echo '127.0.1.1 offline-docker-749900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 02:15:38.762494   10564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 02:15:38.762494   10564 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0722 02:15:38.762494   10564 buildroot.go:174] setting up certificates
	I0722 02:15:38.762494   10564 provision.go:84] configureAuth start
	I0722 02:15:38.762494   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:40.937752   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:40.937752   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:40.937752   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:15:43.485077   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:15:43.485077   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:43.496573   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:45.604501   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:45.604501   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:45.616626   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:15:48.132198   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:15:48.143806   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:48.143806   10564 provision.go:143] copyHostCerts
	I0722 02:15:48.144288   10564 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0722 02:15:48.144288   10564 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0722 02:15:48.144773   10564 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0722 02:15:48.146257   10564 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0722 02:15:48.146257   10564 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0722 02:15:48.146581   10564 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0722 02:15:48.147805   10564 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0722 02:15:48.147805   10564 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0722 02:15:48.148342   10564 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0722 02:15:48.149131   10564 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.offline-docker-749900 san=[127.0.0.1 172.28.207.6 localhost minikube offline-docker-749900]
	I0722 02:15:48.548369   10564 provision.go:177] copyRemoteCerts
	I0722 02:15:48.558892   10564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 02:15:48.558892   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:50.725934   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:50.725934   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:50.736727   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:15:53.231585   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:15:53.231585   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:53.243580   10564 sshutil.go:53] new ssh client: &{IP:172.28.207.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900\id_rsa Username:docker}
	I0722 02:15:53.346429   10564 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7874791s)
	I0722 02:15:53.346501   10564 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 02:15:53.392616   10564 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0722 02:15:53.434239   10564 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 02:15:53.478137   10564 provision.go:87] duration metric: took 14.715465s to configureAuth
	I0722 02:15:53.478263   10564 buildroot.go:189] setting minikube options for container-runtime
	I0722 02:15:53.479023   10564 config.go:182] Loaded profile config "offline-docker-749900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 02:15:53.479023   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:15:55.572661   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:15:55.583416   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:55.583416   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:15:58.070522   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:15:58.070522   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:15:58.086897   10564 main.go:141] libmachine: Using SSH client type: native
	I0722 02:15:58.087437   10564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.207.6 22 <nil> <nil>}
	I0722 02:15:58.087437   10564 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 02:15:58.215818   10564 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 02:15:58.215818   10564 buildroot.go:70] root file system type: tmpfs
	I0722 02:15:58.216183   10564 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 02:15:58.216183   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:16:00.287073   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:16:00.287073   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:00.297322   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:16:02.794036   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:16:02.803935   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:02.810151   10564 main.go:141] libmachine: Using SSH client type: native
	I0722 02:16:02.810151   10564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.207.6 22 <nil> <nil>}
	I0722 02:16:02.810684   10564 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="HTTP_PROXY=172.16.1.1:1"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 02:16:02.963362   10564 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=HTTP_PROXY=172.16.1.1:1
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 02:16:02.963456   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:16:05.077987   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:16:05.077987   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:05.089225   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:16:07.650940   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:16:07.662190   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:07.667533   10564 main.go:141] libmachine: Using SSH client type: native
	I0722 02:16:07.668286   10564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.207.6 22 <nil> <nil>}
	I0722 02:16:07.668286   10564 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 02:16:09.906562   10564 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 02:16:09.906695   10564 machine.go:97] duration metric: took 45.9407509s to provisionDockerMachine
	I0722 02:16:09.906747   10564 client.go:171] duration metric: took 1m59.5932534s to LocalClient.Create
	I0722 02:16:09.906813   10564 start.go:167] duration metric: took 1m59.5933199s to libmachine.API.Create "offline-docker-749900"
	I0722 02:16:09.906879   10564 start.go:293] postStartSetup for "offline-docker-749900" (driver="hyperv")
	I0722 02:16:09.907018   10564 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 02:16:09.921575   10564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 02:16:09.921575   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:16:12.114654   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:16:12.125861   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:12.125861   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:16:14.719650   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:16:14.731423   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:14.731775   10564 sshutil.go:53] new ssh client: &{IP:172.28.207.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900\id_rsa Username:docker}
	I0722 02:16:14.837506   10564 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9157897s)
	I0722 02:16:14.849439   10564 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 02:16:14.859081   10564 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 02:16:14.859081   10564 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0722 02:16:14.859897   10564 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0722 02:16:14.861036   10564 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0722 02:16:14.872565   10564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 02:16:14.892418   10564 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0722 02:16:14.939375   10564 start.go:296] duration metric: took 5.0322964s for postStartSetup
	I0722 02:16:14.943126   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:16:17.126495   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:16:17.137836   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:17.137836   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:16:19.742184   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:16:19.742184   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:19.753495   10564 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\offline-docker-749900\config.json ...
	I0722 02:16:19.757015   10564 start.go:128] duration metric: took 2m9.4487887s to createHost
	I0722 02:16:19.757096   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:16:21.855606   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:16:21.855606   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:21.866411   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:16:24.370903   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:16:24.370903   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:24.377328   10564 main.go:141] libmachine: Using SSH client type: native
	I0722 02:16:24.377869   10564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.207.6 22 <nil> <nil>}
	I0722 02:16:24.377869   10564 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 02:16:24.504652   10564 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721614584.527228280
	
	I0722 02:16:24.504792   10564 fix.go:216] guest clock: 1721614584.527228280
	I0722 02:16:24.504792   10564 fix.go:229] Guest: 2024-07-22 02:16:24.52722828 +0000 UTC Remote: 2024-07-22 02:16:19.7570154 +0000 UTC m=+136.505135401 (delta=4.77021288s)
	I0722 02:16:24.504953   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:16:26.618339   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:16:26.618339   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:26.628970   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:16:29.175526   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:16:29.175526   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:29.185515   10564 main.go:141] libmachine: Using SSH client type: native
	I0722 02:16:29.186831   10564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.207.6 22 <nil> <nil>}
	I0722 02:16:29.186903   10564 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721614584
	I0722 02:16:29.344979   10564 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jul 22 02:16:24 UTC 2024
	
	I0722 02:16:29.345015   10564 fix.go:236] clock set: Mon Jul 22 02:16:24 UTC 2024
	 (err=<nil>)
	I0722 02:16:29.345060   10564 start.go:83] releasing machines lock for "offline-docker-749900", held for 2m19.0368051s
	I0722 02:16:29.345362   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:16:31.610069   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:16:31.615699   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:31.615699   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:16:34.233615   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:16:34.233615   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:34.237147   10564 out.go:177] * Found network options:
	I0722 02:16:34.240125   10564 out.go:177]   - HTTP_PROXY=172.16.1.1:1
	W0722 02:16:34.242351   10564 out.go:239] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.28.207.6).
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.28.207.6).
	I0722 02:16:34.246209   10564 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I0722 02:16:34.248837   10564 out.go:177]   - HTTP_PROXY=172.16.1.1:1
	I0722 02:16:34.254629   10564 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0722 02:16:34.254847   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:16:34.265387   10564 ssh_runner.go:195] Run: cat /version.json
	I0722 02:16:34.265387   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-749900 ).state
	I0722 02:16:36.557311   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:16:36.562353   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:36.562353   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:16:36.609256   10564 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 02:16:36.609256   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:36.609256   10564 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-749900 ).networkadapters[0]).ipaddresses[0]
	I0722 02:16:39.288525   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:16:39.288525   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:39.292936   10564 sshutil.go:53] new ssh client: &{IP:172.28.207.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900\id_rsa Username:docker}
	I0722 02:16:39.378824   10564 main.go:141] libmachine: [stdout =====>] : 172.28.207.6
	
	I0722 02:16:39.378866   10564 main.go:141] libmachine: [stderr =====>] : 
	I0722 02:16:39.378899   10564 sshutil.go:53] new ssh client: &{IP:172.28.207.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\offline-docker-749900\id_rsa Username:docker}
	I0722 02:16:39.387661   10564 ssh_runner.go:235] Completed: cat /version.json: (5.1222117s)
	I0722 02:16:39.400212   10564 ssh_runner.go:195] Run: systemctl --version
	I0722 02:16:39.425608   10564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 02:16:39.428254   10564 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1735623s)
	W0722 02:16:39.428254   10564 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	W0722 02:16:39.433818   10564 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 02:16:39.454029   10564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 02:16:39.485315   10564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 02:16:39.485315   10564 start.go:495] detecting cgroup driver to use...
	I0722 02:16:39.485315   10564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 02:16:39.533411   10564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 02:16:39.567662   10564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0722 02:16:39.579326   10564 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0722 02:16:39.579406   10564 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0722 02:16:39.596368   10564 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 02:16:39.609255   10564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 02:16:39.641939   10564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 02:16:39.673467   10564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 02:16:39.708089   10564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 02:16:39.743306   10564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 02:16:39.774698   10564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 02:16:39.809385   10564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 02:16:39.843098   10564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 02:16:39.883077   10564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 02:16:39.919965   10564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 02:16:39.958423   10564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 02:16:40.157845   10564 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 02:16:40.182088   10564 start.go:495] detecting cgroup driver to use...
	I0722 02:16:40.202643   10564 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 02:16:40.248270   10564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 02:16:40.288911   10564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 02:16:40.342409   10564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 02:16:40.379014   10564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 02:16:40.414021   10564 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 02:16:40.474375   10564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 02:16:40.499920   10564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 02:16:40.549982   10564 ssh_runner.go:195] Run: which cri-dockerd
	I0722 02:16:40.575251   10564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 02:16:40.593556   10564 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 02:16:40.644300   10564 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 02:16:40.838731   10564 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 02:16:41.015647   10564 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 02:16:41.015923   10564 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 02:16:41.062422   10564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 02:16:41.254278   10564 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 02:17:42.372893   10564 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1177445s)
	I0722 02:17:42.384487   10564 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0722 02:17:42.422415   10564 out.go:177] 
	W0722 02:17:42.424064   10564 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 22 02:16:08 offline-docker-749900 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 02:16:08 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:08.275591996Z" level=info msg="Starting up"
	Jul 22 02:16:08 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:08.278873309Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 02:16:08 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:08.280024684Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=669
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.316898679Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.343954137Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.343998740Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.344062644Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.344078945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.344161250Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.344253356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.344591878Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.344684984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.344706586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.344717387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.344830194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.345145414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.348271017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.348376224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.348752049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.348870156Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.349002865Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.349140774Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.375872710Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.376021020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.376047022Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.376064823Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.376081024Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.376207432Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.376835773Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377119091Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377203697Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377230799Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377249800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377264301Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377277302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377293503Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377311604Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377327905Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377341406Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377353307Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377374108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377388409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377405110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377419211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377432312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377485715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377502516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377516117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377535818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377558520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377571921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377586522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377602823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377618324Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377643725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377657626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377673627Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377758333Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377850239Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377868040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377882041Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377896642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377917843Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.377930644Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.378324870Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.378571886Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.378679193Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 02:16:08 offline-docker-749900 dockerd[669]: time="2024-07-22T02:16:08.378740697Z" level=info msg="containerd successfully booted in 0.063062s"
	Jul 22 02:16:09 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:09.354953691Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 02:16:09 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:09.396954549Z" level=info msg="Loading containers: start."
	Jul 22 02:16:09 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:09.581606894Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 02:16:09 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:09.788424026Z" level=info msg="Loading containers: done."
	Jul 22 02:16:09 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:09.816397572Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 02:16:09 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:09.816869701Z" level=info msg="Daemon has completed initialization"
	Jul 22 02:16:09 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:09.926792864Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 02:16:09 offline-docker-749900 systemd[1]: Started Docker Application Container Engine.
	Jul 22 02:16:09 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:09.927206890Z" level=info msg="API listen on [::]:2376"
	Jul 22 02:16:41 offline-docker-749900 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 02:16:41 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:41.305826525Z" level=info msg="Processing signal 'terminated'"
	Jul 22 02:16:41 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:41.307189927Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 02:16:41 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:41.308409129Z" level=info msg="Daemon shutdown complete"
	Jul 22 02:16:41 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:41.308572629Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 02:16:41 offline-docker-749900 dockerd[663]: time="2024-07-22T02:16:41.308651830Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 02:16:42 offline-docker-749900 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 02:16:42 offline-docker-749900 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 02:16:42 offline-docker-749900 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 02:16:42 offline-docker-749900 dockerd[1072]: time="2024-07-22T02:16:42.370152951Z" level=info msg="Starting up"
	Jul 22 02:17:42 offline-docker-749900 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 02:17:42 offline-docker-749900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 02:17:42 offline-docker-749900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 02:17:42 offline-docker-749900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0722 02:17:42.425856   10564 out.go:239] * 
	W0722 02:17:42.427816   10564 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 02:17:42.430496   10564 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-windows-amd64.exe start -p offline-docker-749900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv failed: exit status 90
panic.go:626: *** TestOffline FAILED at 2024-07-22 02:17:42.7003973 +0000 UTC m=+10324.857524001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-749900 -n offline-docker-749900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-749900 -n offline-docker-749900: exit status 6 (12.3890024s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 02:17:42.821616    2112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0722 02:17:55.031331    2112 status.go:417] kubeconfig endpoint: get endpoint: "offline-docker-749900" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "offline-docker-749900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "offline-docker-749900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-749900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-749900: (1m2.5202304s)
--- FAIL: TestOffline (294.39s)
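Failure chain, as captured above: the run started behind HTTP_PROXY=172.16.1.1:1 with no NO_PROXY covering the minikube IP (172.28.207.6); the in-VM registry probe failed outright (`bash: line 1: curl.exe: command not found` — the Windows binary name was invoked inside the Linux guest); and the start finally aborted at RUNTIME_ENABLE because `sudo systemctl restart docker` ran for 1m1s and failed with "failed to dial /run/containerd/containerd.sock: context deadline exceeded", i.e. dockerd never reached containerd within its 60s startup deadline after minikube rewrote /etc/containerd/config.toml and restarted containerd. A minimal diagnosis sketch for a rerun of the same profile name (the profile is deleted during cleanup above, so these commands assume a fresh run; the IP matches this run only):

	# On the Windows host (PowerShell): exclude the VM IP from the proxy
	# before `minikube start`, per the vpn_and_proxy handbook page linked above.
	$env:NO_PROXY = "172.28.207.6"

	# Inside the guest: check whether containerd came back up before
	# docker.service hit its restart deadline.
	minikube ssh -p offline-docker-749900 -- sudo systemctl status containerd
	minikube ssh -p offline-docker-749900 -- sudo journalctl -u containerd -n 50 --no-pager
	minikube ssh -p offline-docker-749900 -- sudo journalctl -xeu docker.service -n 50 --no-pager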

                                                
                                    
x
+
TestAddons/parallel/Registry (75.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 30.3657ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-7mxg8" [46398a1d-dacc-4292-8984-e35ae91f0e91] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0231701s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lnbd6" [eb5100be-763d-433b-8160-6551a6e6c4ed] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0169545s
addons_test.go:342: (dbg) Run:  kubectl --context addons-979300 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-979300 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-979300 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.2893608s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 ip: (3.3784991s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0721 23:34:29.609433   12488 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-979300 ip"
2024/07/21 23:34:32 [DEBUG] GET http://172.28.202.6:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 addons disable registry --alsologtostderr -v=1: (15.6277589s)
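The registry checks themselves passed (both pods healthy, the in-cluster `wget --spider` succeeded); the assertion at addons_test.go:366 failed only because `minikube ip` emitted the recurring "Unable to resolve the current Docker CLI context \"default\"" warning on stderr — the same stale-context message logged at the start of every command in this report. One possible host-side cleanup, sketched under the assumption that the Docker CLI is on the agent's PATH; not verified against this machine:

	# PowerShell, on the Jenkins agent: inspect and re-select the built-in
	# default context. The hash in the warning appears to be the digest of
	# the context name "default" under %USERPROFILE%\.docker\contexts\meta.
	docker context ls
	docker context use default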
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-979300 -n addons-979300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-979300 -n addons-979300: (13.2959959s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 logs -n 25: (10.1412592s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-823800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | -p download-only-823800              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-823800              | download-only-823800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:26 UTC |
	| start   | -o=json --download-only              | download-only-451800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC |                     |
	|         | -p download-only-451800              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC | 21 Jul 24 23:26 UTC |
	| delete  | -p download-only-451800              | download-only-451800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC | 21 Jul 24 23:26 UTC |
	| start   | -o=json --download-only              | download-only-258200 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC |                     |
	|         | -p download-only-258200              |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr            |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0  |                      |                   |         |                     |                     |
	|         | --container-runtime=docker           |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | --all                                | minikube             | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC | 21 Jul 24 23:26 UTC |
	| delete  | -p download-only-258200              | download-only-258200 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC | 21 Jul 24 23:26 UTC |
	| delete  | -p download-only-823800              | download-only-823800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC | 21 Jul 24 23:26 UTC |
	| delete  | -p download-only-451800              | download-only-451800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC | 21 Jul 24 23:26 UTC |
	| delete  | -p download-only-258200              | download-only-258200 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC | 21 Jul 24 23:26 UTC |
	| start   | --download-only -p                   | binary-mirror-264300 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC |                     |
	|         | binary-mirror-264300                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr                    |                      |                   |         |                     |                     |
	|         | --binary-mirror                      |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:51198               |                      |                   |         |                     |                     |
	|         | --driver=hyperv                      |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-264300              | binary-mirror-264300 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC | 21 Jul 24 23:26 UTC |
	| addons  | disable dashboard -p                 | addons-979300        | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC |                     |
	|         | addons-979300                        |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-979300        | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC |                     |
	|         | addons-979300                        |                      |                   |         |                     |                     |
	| start   | -p addons-979300 --wait=true         | addons-979300        | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC | 21 Jul 24 23:34 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --addons=registry                    |                      |                   |         |                     |                     |
	|         | --addons=metrics-server              |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress     |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |                   |         |                     |                     |
	| addons  | enable headlamp                      | addons-979300        | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:34 UTC | 21 Jul 24 23:34 UTC |
	|         | -p addons-979300                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-979300        | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:34 UTC | 21 Jul 24 23:34 UTC |
	|         | -p addons-979300                     |                      |                   |         |                     |                     |
	| ip      | addons-979300 ip                     | addons-979300        | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:34 UTC | 21 Jul 24 23:34 UTC |
	| addons  | addons-979300 addons disable         | addons-979300        | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:34 UTC | 21 Jul 24 23:34 UTC |
	|         | registry --alsologtostderr           |                      |                   |         |                     |                     |
	|         | -v=1                                 |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-979300        | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:34 UTC |                     |
	|         | addons-979300                        |                      |                   |         |                     |                     |
	|---------|--------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:26:44
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:26:44.209534    4176 out.go:291] Setting OutFile to fd 592 ...
	I0721 23:26:44.210851    4176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:26:44.210851    4176 out.go:304] Setting ErrFile to fd 580...
	I0721 23:26:44.210851    4176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:26:44.231844    4176 out.go:298] Setting JSON to false
	I0721 23:26:44.234947    4176 start.go:129] hostinfo: {"hostname":"minikube6","uptime":119611,"bootTime":1721484792,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0721 23:26:44.235118    4176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 23:26:44.239343    4176 out.go:177] * [addons-979300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0721 23:26:44.243504    4176 notify.go:220] Checking for updates...
	I0721 23:26:44.245610    4176 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0721 23:26:44.248509    4176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:26:44.251277    4176 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0721 23:26:44.253219    4176 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:26:44.255686    4176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:26:44.262505    4176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:26:49.873316    4176 out.go:177] * Using the hyperv driver based on user configuration
	I0721 23:26:49.877906    4176 start.go:297] selected driver: hyperv
	I0721 23:26:49.877906    4176 start.go:901] validating driver "hyperv" against <nil>
	I0721 23:26:49.877906    4176 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:26:49.929394    4176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 23:26:49.930528    4176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:26:49.930528    4176 cni.go:84] Creating CNI manager for ""
	I0721 23:26:49.930528    4176 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 23:26:49.930528    4176 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 23:26:49.931107    4176 start.go:340] cluster config:
	{Name:addons-979300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-979300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:26:49.931250    4176 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:26:49.935984    4176 out.go:177] * Starting "addons-979300" primary control-plane node in "addons-979300" cluster
	I0721 23:26:49.942008    4176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 23:26:49.942059    4176 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 23:26:49.942059    4176 cache.go:56] Caching tarball of preloaded images
	I0721 23:26:49.942059    4176 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 23:26:49.942640    4176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 23:26:49.942777    4176 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\config.json ...
	I0721 23:26:49.943507    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\config.json: {Name:mkb22afa754ca3348a1324c32eea56362aa0e3f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:26:49.944244    4176 start.go:360] acquireMachinesLock for addons-979300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:26:49.944244    4176 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-979300"
	I0721 23:26:49.944859    4176 start.go:93] Provisioning new machine with config: &{Name:addons-979300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-979300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 23:26:49.944859    4176 start.go:125] createHost starting for "" (driver="hyperv")
	I0721 23:26:49.946811    4176 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0721 23:26:49.947866    4176 start.go:159] libmachine.API.Create for "addons-979300" (driver="hyperv")
	I0721 23:26:49.947866    4176 client.go:168] LocalClient.Create starting
	I0721 23:26:49.947866    4176 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0721 23:26:50.218509    4176 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0721 23:26:50.401685    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0721 23:26:52.555055    4176 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0721 23:26:52.555273    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:26:52.555330    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0721 23:26:54.357568    4176 main.go:141] libmachine: [stdout =====>] : False
	
	I0721 23:26:54.357568    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:26:54.357568    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0721 23:26:55.874173    4176 main.go:141] libmachine: [stdout =====>] : True
	
	I0721 23:26:55.874173    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:26:55.874938    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0721 23:26:59.675030    4176 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0721 23:26:59.675729    4176 main.go:141] libmachine: [stderr =====>] : 
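	
	The switch query above is the gating step: minikube accepts any External vSwitch, or the built-in "Default Switch" matched by its well-known GUID. The same query as a standalone PowerShell sketch (GUID taken from this log):
	
	    # Enumerate switches minikube considers usable: External switches plus "Default Switch".
	    $defaultSwitchId = 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444'
	    Hyper-V\Get-VMSwitch |
	        Select-Object Id, Name, SwitchType |
	        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq $defaultSwitchId) } |
	        Sort-Object -Property SwitchType |
	        ConvertTo-Json
	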
	I0721 23:26:59.677845    4176 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0721 23:27:00.117774    4176 main.go:141] libmachine: Creating SSH key...
	I0721 23:27:00.216557    4176 main.go:141] libmachine: Creating VM...
	I0721 23:27:00.216557    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0721 23:27:03.068381    4176 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0721 23:27:03.068381    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:03.069184    4176 main.go:141] libmachine: Using switch "Default Switch"
	I0721 23:27:03.069326    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0721 23:27:04.838312    4176 main.go:141] libmachine: [stdout =====>] : True
	
	I0721 23:27:04.839065    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:04.839065    4176 main.go:141] libmachine: Creating VHD
	I0721 23:27:04.839065    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0721 23:27:08.669912    4176 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3AA8856B-AF0F-470A-AB0D-FE0C85402B84
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0721 23:27:08.669993    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:08.669993    4176 main.go:141] libmachine: Writing magic tar header
	I0721 23:27:08.670248    4176 main.go:141] libmachine: Writing SSH key tar header
	I0721 23:27:08.680135    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0721 23:27:11.975116    4176 main.go:141] libmachine: [stdout =====>] : 
	I0721 23:27:11.975430    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:11.975533    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\disk.vhd' -SizeBytes 20000MB
	I0721 23:27:14.546987    4176 main.go:141] libmachine: [stdout =====>] : 
	I0721 23:27:14.548111    4176 main.go:141] libmachine: [stderr =====>] : 
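	
	The three VHD commands above are minikube's disk-seeding trick: create a tiny fixed-size VHD, overwrite its raw payload with a tar stream carrying the SSH key (the "magic tar header" lines below), then convert it to a dynamic disk and grow it to the requested size. The PowerShell half, condensed into a sketch (the path is a placeholder; the tar write itself happens in minikube's Go code):
	
	    $dir = 'C:\path\to\machines\example'    # placeholder machine directory
	    # 1. Tiny fixed VHD whose raw payload minikube then overwrites with a tar stream.
	    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
	    # ...minikube writes the magic tar header and SSH key into fixed.vhd here...
	    # 2. Convert to a dynamic (sparse) disk, deleting the fixed source.
	    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
	    # 3. Grow the virtual size to the requested disk size (20000MB in this run).
	    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB
	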
	I0721 23:27:14.548111    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-979300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0721 23:27:18.278123    4176 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-979300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0721 23:27:18.278123    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:18.278123    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-979300 -DynamicMemoryEnabled $false
	I0721 23:27:20.575844    4176 main.go:141] libmachine: [stdout =====>] : 
	I0721 23:27:20.576977    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:20.576977    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-979300 -Count 2
	I0721 23:27:22.800200    4176 main.go:141] libmachine: [stdout =====>] : 
	I0721 23:27:22.800200    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:22.801269    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-979300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\boot2docker.iso'
	I0721 23:27:25.407647    4176 main.go:141] libmachine: [stdout =====>] : 
	I0721 23:27:25.407647    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:25.407647    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-979300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\disk.vhd'
	I0721 23:27:28.176571    4176 main.go:141] libmachine: [stdout =====>] : 
	I0721 23:27:28.176571    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:28.176571    4176 main.go:141] libmachine: Starting VM...
	I0721 23:27:28.176571    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-979300
	I0721 23:27:31.382532    4176 main.go:141] libmachine: [stdout =====>] : 
	I0721 23:27:31.382616    4176 main.go:141] libmachine: [stderr =====>] : 
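	
	Collected from the six commands above, the whole VM build comes down to the following sketch (name, switch, and memory/CPU sizes mirror this run; the directory is a placeholder):
	
	    $name = 'addons-979300'
	    $dir  = 'C:\path\to\machines\addons-979300'              # placeholder
	    Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	    Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false   # pin memory at 4000MB
	    Hyper-V\Set-VMProcessor $name -Count 2
	    Hyper-V\Set-VMDvdDrive -VMName $name -Path "$dir\boot2docker.iso" # boot ISO
	    Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"   # seeded disk from above
	    Hyper-V\Start-VM $name
	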
	I0721 23:27:31.382616    4176 main.go:141] libmachine: Waiting for host to start...
	I0721 23:27:31.382616    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:27:33.695507    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:27:33.695507    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:33.695757    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:27:36.219418    4176 main.go:141] libmachine: [stdout =====>] : 
	I0721 23:27:36.219418    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:37.223098    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:27:39.471494    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:27:39.471494    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:39.471494    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:27:42.070510    4176 main.go:141] libmachine: [stdout =====>] : 
	I0721 23:27:42.070510    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:43.075258    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:27:45.313233    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:27:45.313233    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:45.314105    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:27:47.892083    4176 main.go:141] libmachine: [stdout =====>] : 
	I0721 23:27:47.892083    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:48.907908    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:27:51.219143    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:27:51.219143    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:51.219241    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:27:53.827388    4176 main.go:141] libmachine: [stdout =====>] : 
	I0721 23:27:53.827388    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:54.834016    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:27:57.128744    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:27:57.129043    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:27:57.129043    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:27:59.716206    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:27:59.716444    4176 main.go:141] libmachine: [stderr =====>] : 
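	
	The alternating state/IP probes above are a plain poll: the VM reports Running almost immediately, but the first adapter has no address until the guest's DHCP lease lands (about half a minute in this run). A loop with the same shape:
	
	    $name = 'addons-979300'
	    do {
	        Start-Sleep -Seconds 1
	        $state = (Hyper-V\Get-VM $name).State
	        $ip    = ((Hyper-V\Get-VM $name).NetworkAdapters[0]).IPAddresses[0]
	    } until ($state -eq 'Running' -and $ip)
	    "VM reachable at $ip"
	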
	I0721 23:27:59.716444    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:01.901901    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:01.901901    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:01.901901    4176 machine.go:94] provisionDockerMachine start ...
	I0721 23:28:01.901901    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:04.101814    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:04.102270    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:04.102270    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:28:06.647157    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:28:06.647157    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:06.653377    4176 main.go:141] libmachine: Using SSH client type: native
	I0721 23:28:06.665353    4176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.202.6 22 <nil> <nil>}
	I0721 23:28:06.665353    4176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0721 23:28:06.792862    4176 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0721 23:28:06.793023    4176 buildroot.go:166] provisioning hostname "addons-979300"
	I0721 23:28:06.793112    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:09.067497    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:09.067497    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:09.068407    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:28:11.759974    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:28:11.760046    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:11.765403    4176 main.go:141] libmachine: Using SSH client type: native
	I0721 23:28:11.765977    4176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.202.6 22 <nil> <nil>}
	I0721 23:28:11.766121    4176 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-979300 && echo "addons-979300" | sudo tee /etc/hostname
	I0721 23:28:11.932621    4176 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-979300
	
	I0721 23:28:11.932730    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:14.204306    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:14.204306    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:14.204991    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:28:16.917244    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:28:16.917244    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:16.923823    4176 main.go:141] libmachine: Using SSH client type: native
	I0721 23:28:16.924664    4176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.202.6 22 <nil> <nil>}
	I0721 23:28:16.924664    4176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-979300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-979300/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-979300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:28:17.073267    4176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
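	
	From here provisioning runs over SSH as user docker with the generated key. The hostname step above, reproduced as a sketch ($ip is taken from this run, $key is a placeholder; the /etc/hosts command is a simplified form of the script minikube sends):
	
	    $ip  = '172.28.202.6'
	    $key = 'C:\path\to\machines\addons-979300\id_rsa'        # placeholder key path
	    ssh -i $key docker@$ip 'sudo hostname addons-979300 && echo "addons-979300" | sudo tee /etc/hostname'
	    # Simplified /etc/hosts fix-up: add the name if no line already carries it.
	    ssh -i $key docker@$ip 'grep -q addons-979300 /etc/hosts || echo "127.0.1.1 addons-979300" | sudo tee -a /etc/hosts'
	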
	I0721 23:28:17.073267    4176 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0721 23:28:17.073267    4176 buildroot.go:174] setting up certificates
	I0721 23:28:17.073267    4176 provision.go:84] configureAuth start
	I0721 23:28:17.073267    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:19.389553    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:19.389553    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:19.390285    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:28:22.083700    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:28:22.084631    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:22.084833    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:24.260430    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:24.260430    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:24.260785    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:28:26.815474    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:28:26.815474    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:26.815474    4176 provision.go:143] copyHostCerts
	I0721 23:28:26.816006    4176 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0721 23:28:26.817520    4176 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0721 23:28:26.819065    4176 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0721 23:28:26.819942    4176 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-979300 san=[127.0.0.1 172.28.202.6 addons-979300 localhost minikube]
	I0721 23:28:27.155944    4176 provision.go:177] copyRemoteCerts
	I0721 23:28:27.177465    4176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:28:27.177465    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:29.362398    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:29.362398    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:29.362398    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:28:31.951589    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:28:31.952251    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:31.952605    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:28:32.061807    4176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8842787s)
	I0721 23:28:32.061807    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0721 23:28:32.107841    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0721 23:28:32.157278    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0721 23:28:32.201513    4176 provision.go:87] duration metric: took 15.1280508s to configureAuth
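	
	configureAuth generated a server certificate whose SANs cover the VM address and names (the san=[...] list above) and pushed three PEM files into /etc/docker. One way to spot-check the result from the host ($ip/$key are placeholders as before; the openssl step assumes openssl exists in the guest, which this log does not confirm):
	
	    $ip  = '172.28.202.6'; $key = 'C:\path\to\machines\addons-979300\id_rsa'   # placeholders
	    ssh -i $key docker@$ip 'ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem'
	    # SAN inspection; requires openssl inside the guest (assumption):
	    ssh -i $key docker@$ip 'openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName'
	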
	I0721 23:28:32.201513    4176 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:28:32.202418    4176 config.go:182] Loaded profile config "addons-979300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:28:32.202418    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:34.379140    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:34.379995    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:34.380400    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:28:36.937720    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:28:36.938527    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:36.944344    4176 main.go:141] libmachine: Using SSH client type: native
	I0721 23:28:36.945053    4176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.202.6 22 <nil> <nil>}
	I0721 23:28:36.945053    4176 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0721 23:28:37.082420    4176 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0721 23:28:37.082420    4176 buildroot.go:70] root file system type: tmpfs
	I0721 23:28:37.082420    4176 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0721 23:28:37.083133    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:39.246863    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:39.246863    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:39.246863    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:28:41.814492    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:28:41.814492    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:41.820831    4176 main.go:141] libmachine: Using SSH client type: native
	I0721 23:28:41.821316    4176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.202.6 22 <nil> <nil>}
	I0721 23:28:41.821425    4176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0721 23:28:41.978237    4176 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0721 23:28:41.978268    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:44.200952    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:44.200952    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:44.200952    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:28:46.877630    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:28:46.877630    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:46.884244    4176 main.go:141] libmachine: Using SSH client type: native
	I0721 23:28:46.884244    4176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.202.6 22 <nil> <nil>}
	I0721 23:28:46.884244    4176 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0721 23:28:49.137751    4176 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
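	
	The diff-or-install one-liner above is an idempotent unit update: when docker.service differs from the rendered .new file (or, as here, does not exist yet, hence the "can't stat" message), the new file is moved into place and docker is reloaded, enabled, and restarted. Isolated into a sketch ($ip/$key placeholders as before):
	
	    $ip  = '172.28.202.6'; $key = 'C:\path\to\machines\addons-979300\id_rsa'   # placeholders
	    ssh -i $key docker@$ip 'sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }'
	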
	
	I0721 23:28:49.137751    4176 machine.go:97] duration metric: took 47.2352415s to provisionDockerMachine
	I0721 23:28:49.137751    4176 client.go:171] duration metric: took 1m59.1883424s to LocalClient.Create
	I0721 23:28:49.137751    4176 start.go:167] duration metric: took 1m59.1883424s to libmachine.API.Create "addons-979300"
	I0721 23:28:49.137751    4176 start.go:293] postStartSetup for "addons-979300" (driver="hyperv")
	I0721 23:28:49.137751    4176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:28:49.148732    4176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:28:49.149740    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:51.334178    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:51.334178    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:51.334348    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:28:53.913185    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:28:53.913490    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:53.913490    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:28:54.023830    4176 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8739687s)
	I0721 23:28:54.037411    4176 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:28:54.044491    4176 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:28:54.044491    4176 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0721 23:28:54.044786    4176 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0721 23:28:54.045354    4176 start.go:296] duration metric: took 4.9075404s for postStartSetup
	I0721 23:28:54.049172    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:28:56.264624    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:28:56.264840    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:56.264840    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:28:58.817065    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:28:58.817065    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:28:58.818138    4176 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\config.json ...
	I0721 23:28:58.821614    4176 start.go:128] duration metric: took 2m8.8750897s to createHost
	I0721 23:28:58.821728    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:29:01.002326    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:29:01.002666    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:29:01.002774    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:29:03.578230    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:29:03.578230    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:29:03.584186    4176 main.go:141] libmachine: Using SSH client type: native
	I0721 23:29:03.584548    4176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.202.6 22 <nil> <nil>}
	I0721 23:29:03.584548    4176 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0721 23:29:03.721701    4176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721604543.748988470
	
	I0721 23:29:03.721892    4176 fix.go:216] guest clock: 1721604543.748988470
	I0721 23:29:03.721892    4176 fix.go:229] Guest: 2024-07-21 23:29:03.74898847 +0000 UTC Remote: 2024-07-21 23:28:58.8217281 +0000 UTC m=+134.777178401 (delta=4.92726037s)
	I0721 23:29:03.722219    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:29:05.899352    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:29:05.899352    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:29:05.900321    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:29:08.468165    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:29:08.468165    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:29:08.475074    4176 main.go:141] libmachine: Using SSH client type: native
	I0721 23:29:08.475216    4176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.202.6 22 <nil> <nil>}
	I0721 23:29:08.475216    4176 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721604543
	I0721 23:29:08.620842    4176 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Jul 21 23:29:03 UTC 2024
	
	I0721 23:29:08.620842    4176 fix.go:236] clock set: Sun Jul 21 23:29:03 UTC 2024
	 (err=<nil>)
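	
	The clock fix-up above reads the guest's date +%s.%N, computes the drift against the host clock (about 4.9s accumulated during provisioning), and rewrites the guest time at second precision. The same comparison as a sketch ($ip/$key placeholders as before):
	
	    $ip  = '172.28.202.6'; $key = 'C:\path\to\machines\addons-979300\id_rsa'   # placeholders
	    $guest   = [double](ssh -i $key docker@$ip 'date +%s.%N')
	    $hostNow = [DateTimeOffset]::UtcNow.ToUnixTimeMilliseconds() / 1000.0
	    "guest-host clock delta: $([math]::Round($guest - $hostNow, 3))s"
	    # Force the guest clock to the host's "now", as the log shows:
	    ssh -i $key docker@$ip "sudo date -s @$([DateTimeOffset]::UtcNow.ToUnixTimeSeconds())"
	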
	I0721 23:29:08.620842    4176 start.go:83] releasing machines lock for "addons-979300", held for 2m18.6742726s
	I0721 23:29:08.621426    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:29:10.811226    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:29:10.811226    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:29:10.812232    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:29:13.386186    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:29:13.386186    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:29:13.391569    4176 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0721 23:29:13.391569    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:29:13.404944    4176 ssh_runner.go:195] Run: cat /version.json
	I0721 23:29:13.404944    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:29:15.645169    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:29:15.645169    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:29:15.645876    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:29:15.645876    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:29:15.645876    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:29:15.645876    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:29:18.335481    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:29:18.335481    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:29:18.336280    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:29:18.368023    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:29:18.368417    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:29:18.369092    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:29:18.438006    4176 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0463728s)
	W0721 23:29:18.438192    4176 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
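	
	Note the probe that failed here: the registry reachability check invoked curl.exe, the Windows binary name, inside the Linux guest, where only curl exists, so it exits with status 127; that failure is what produces the proxy warnings further down. The in-guest probe it was aiming for is simply ($ip/$key placeholders as before):
	
	    $ip  = '172.28.202.6'; $key = 'C:\path\to\machines\addons-979300\id_rsa'   # placeholders
	    ssh -i $key docker@$ip 'curl -sS -m 2 https://registry.k8s.io/'
	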
	I0721 23:29:18.456899    4176 ssh_runner.go:235] Completed: cat /version.json: (5.0518899s)
	I0721 23:29:18.471224    4176 ssh_runner.go:195] Run: systemctl --version
	I0721 23:29:18.496174    4176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0721 23:29:18.504397    4176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:29:18.516941    4176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0721 23:29:18.546442    4176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0721 23:29:18.546442    4176 start.go:495] detecting cgroup driver to use...
	I0721 23:29:18.546442    4176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:29:18.593232    4176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0721 23:29:18.623147    4176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0721 23:29:18.646742    4176 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0721 23:29:18.658217    4176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0721 23:29:18.690500    4176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:29:18.721278    4176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	W0721 23:29:18.727925    4176 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0721 23:29:18.727925    4176 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0721 23:29:18.754086    4176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:29:18.785828    4176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:29:18.816989    4176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0721 23:29:18.849124    4176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0721 23:29:18.882914    4176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0721 23:29:18.913514    4176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:29:18.945218    4176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 23:29:18.977476    4176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:29:19.180861    4176 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0721 23:29:19.210583    4176 start.go:495] detecting cgroup driver to use...
	I0721 23:29:19.224079    4176 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0721 23:29:19.264679    4176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:29:19.302163    4176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:29:19.345855    4176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:29:19.387827    4176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0721 23:29:19.427360    4176 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0721 23:29:19.494235    4176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0721 23:29:19.518044    4176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:29:19.568265    4176 ssh_runner.go:195] Run: which cri-dockerd
	I0721 23:29:19.586385    4176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0721 23:29:19.607244    4176 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0721 23:29:19.649091    4176 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0721 23:29:19.861722    4176 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0721 23:29:20.050976    4176 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0721 23:29:20.051358    4176 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0721 23:29:20.100051    4176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:29:20.295904    4176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 23:29:22.886785    4176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5908479s)
	I0721 23:29:22.900249    4176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0721 23:29:22.936555    4176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0721 23:29:22.974434    4176 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0721 23:29:23.175016    4176 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0721 23:29:23.386424    4176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:29:23.603598    4176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0721 23:29:23.642900    4176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0721 23:29:23.676454    4176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:29:23.876580    4176 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
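
The Docker side mirrors the containerd step: a small daemon.json selects the cgroupfs driver, then docker and cri-docker are unmasked, enabled, and restarted. The 130-byte daemon.json itself is not reproduced in the log, so the file below is a representative assumption, not the exact bytes:

    # Assumed daemon.json content -- only the cgroupfs exec-opt is implied by the log.
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "storage-driver": "overlay2"
    }
    EOF
    # Unit management sequence as logged above.
    sudo systemctl unmask docker.service
    sudo systemctl enable docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl restart cri-docker.socket cri-docker.service
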
	I0721 23:29:23.980388    4176 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0721 23:29:23.993609    4176 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0721 23:29:24.002513    4176 start.go:563] Will wait 60s for crictl version
	I0721 23:29:24.013019    4176 ssh_runner.go:195] Run: which crictl
	I0721 23:29:24.030275    4176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0721 23:29:24.091516    4176 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0721 23:29:24.101880    4176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0721 23:29:24.145389    4176 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0721 23:29:24.179470    4176 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0721 23:29:24.179724    4176 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0721 23:29:24.184481    4176 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0721 23:29:24.184584    4176 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0721 23:29:24.184584    4176 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0721 23:29:24.184584    4176 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e8:0a:ec Flags:up|broadcast|multicast|running}
	I0721 23:29:24.187965    4176 ip.go:210] interface addr: fe80::cedd:59ec:4db2:d0bf/64
	I0721 23:29:24.188015    4176 ip.go:210] interface addr: 172.28.192.1/20
	I0721 23:29:24.201255    4176 ssh_runner.go:195] Run: grep 172.28.192.1	host.minikube.internal$ /etc/hosts
	I0721 23:29:24.206122    4176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
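
The /etc/hosts update above uses a filter-and-append pattern rather than editing in place: drop any stale host.minikube.internal record, append a fresh one with the gateway address found above, and copy the temporary file back with sudo:

    # Same technique as the logged command.
    IP=172.28.192.1
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '%s\thost.minikube.internal\n' "$IP"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
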
	I0721 23:29:24.230076    4176 kubeadm.go:883] updating cluster {Name:addons-979300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-979300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.202.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0721 23:29:24.230304    4176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 23:29:24.240024    4176 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0721 23:29:24.261798    4176 docker.go:685] Got preloaded images: 
	I0721 23:29:24.261798    4176 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0721 23:29:24.272967    4176 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0721 23:29:24.303213    4176 ssh_runner.go:195] Run: which lz4
	I0721 23:29:24.323620    4176 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0721 23:29:24.328623    4176 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0721 23:29:24.328623    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0721 23:29:26.259154    4176 docker.go:649] duration metric: took 1.9478094s to copy over tarball
	I0721 23:29:26.270805    4176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0721 23:29:31.755196    4176 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.4832074s)
	I0721 23:29:31.755317    4176 ssh_runner.go:146] rm: /preloaded.tar.lz4
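
The preload restore pattern above: check whether the tarball already exists on the guest, copy it over if not, unpack it into /var with extended attributes preserved (so file capabilities survive), then delete the tarball:

    # Extraction step as run above; lz4 must be available on the guest.
    sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
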
	I0721 23:29:31.820858    4176 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0721 23:29:31.840407    4176 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0721 23:29:31.881517    4176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:29:32.079653    4176 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 23:29:37.887388    4176 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.8076608s)
	I0721 23:29:37.895791    4176 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0721 23:29:37.924214    4176 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0721 23:29:37.924378    4176 cache_images.go:84] Images are preloaded, skipping loading
	I0721 23:29:37.924446    4176 kubeadm.go:934] updating node { 172.28.202.6 8443 v1.30.3 docker true true} ...
	I0721 23:29:37.924793    4176 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-979300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.202.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-979300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0721 23:29:37.933281    4176 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0721 23:29:37.974502    4176 cni.go:84] Creating CNI manager for ""
	I0721 23:29:37.974502    4176 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 23:29:37.974502    4176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0721 23:29:37.974502    4176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.202.6 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-979300 NodeName:addons-979300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.202.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.202.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0721 23:29:37.974502    4176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.202.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-979300"
	  kubeletExtraArgs:
	    node-ip: 172.28.202.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.202.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
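
A config like the one above can be exercised without touching the node via kubeadm's dry-run mode; this is a general kubeadm facility, not something minikube runs here:

    # Hypothetical pre-flight: render what init would do, change nothing.
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run
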
	I0721 23:29:37.985540    4176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0721 23:29:38.002619    4176 binaries.go:44] Found k8s binaries, skipping transfer
	I0721 23:29:38.014733    4176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0721 23:29:38.031456    4176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0721 23:29:38.062659    4176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0721 23:29:38.092828    4176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0721 23:29:38.133618    4176 ssh_runner.go:195] Run: grep 172.28.202.6	control-plane.minikube.internal$ /etc/hosts
	I0721 23:29:38.139270    4176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.202.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0721 23:29:38.169120    4176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:29:38.362988    4176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:29:38.391699    4176 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300 for IP: 172.28.202.6
	I0721 23:29:38.391782    4176 certs.go:194] generating shared ca certs ...
	I0721 23:29:38.391885    4176 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:29:38.392421    4176 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0721 23:29:38.542869    4176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0721 23:29:38.542869    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:29:38.545132    4176 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0721 23:29:38.545132    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:29:38.546584    4176 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0721 23:29:39.365432    4176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0721 23:29:39.365432    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:29:39.367314    4176 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0721 23:29:39.367314    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:29:39.367839    4176 certs.go:256] generating profile certs ...
	I0721 23:29:39.368949    4176 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.key
	I0721 23:29:39.368949    4176 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt with IP's: []
	I0721 23:29:39.659760    4176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt ...
	I0721 23:29:39.659760    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: {Name:mkba722e62ef6f4551c83246ed39a8ed8054c9d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:29:39.661296    4176 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.key ...
	I0721 23:29:39.661419    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.key: {Name:mk863fcf0e213e446e012ae7d90e12715f2b1892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:29:39.662758    4176 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.key.e7f60b5a
	I0721 23:29:39.662986    4176 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.crt.e7f60b5a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.202.6]
	I0721 23:29:39.757415    4176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.crt.e7f60b5a ...
	I0721 23:29:39.757415    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.crt.e7f60b5a: {Name:mk6c7431f96bc8b9a788a98f346c94f296533648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:29:39.758449    4176 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.key.e7f60b5a ...
	I0721 23:29:39.758449    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.key.e7f60b5a: {Name:mk786e00907b5353950044aa589dab519d1374cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:29:39.759218    4176 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.crt.e7f60b5a -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.crt
	I0721 23:29:39.769317    4176 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.key.e7f60b5a -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.key
	I0721 23:29:39.770417    4176 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\proxy-client.key
	I0721 23:29:39.770417    4176 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\proxy-client.crt with IP's: []
	I0721 23:29:40.079505    4176 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\proxy-client.crt ...
	I0721 23:29:40.079505    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\proxy-client.crt: {Name:mk3d43f27c3ad83d0b8961b9934a375c12d38c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:29:40.081501    4176 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\proxy-client.key ...
	I0721 23:29:40.081501    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\proxy-client.key: {Name:mkfedd7d45b74b5ebcd6469971cfceaa5e02f8cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:29:40.093490    4176 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0721 23:29:40.094489    4176 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0721 23:29:40.094722    4176 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0721 23:29:40.094997    4176 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0721 23:29:40.096796    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0721 23:29:40.146758    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0721 23:29:40.195970    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0721 23:29:40.240960    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0721 23:29:40.284845    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0721 23:29:40.330869    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0721 23:29:40.383755    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0721 23:29:40.428239    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0721 23:29:40.474058    4176 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0721 23:29:40.520683    4176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0721 23:29:40.564409    4176 ssh_runner.go:195] Run: openssl version
	I0721 23:29:40.584307    4176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0721 23:29:40.612214    4176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:29:40.619279    4176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:29 /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:29:40.630631    4176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0721 23:29:40.649940    4176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
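
The b5213941.0 link above follows OpenSSL's hashed-directory convention: a CA certificate is looked up by the hash of its subject name, so the symlink is named with that hash plus a .0 suffix:

    # Recompute the hash and recreate the symlink the runner just made.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # HASH is b5213941 here
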
	I0721 23:29:40.681590    4176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0721 23:29:40.688036    4176 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0721 23:29:40.688488    4176 kubeadm.go:392] StartCluster: {Name:addons-979300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-979300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.202.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:29:40.696138    4176 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0721 23:29:40.732393    4176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0721 23:29:40.760289    4176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0721 23:29:40.788314    4176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0721 23:29:40.804999    4176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0721 23:29:40.804999    4176 kubeadm.go:157] found existing configuration files:
	
	I0721 23:29:40.815480    4176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0721 23:29:40.830962    4176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0721 23:29:40.844273    4176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0721 23:29:40.872002    4176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0721 23:29:40.887771    4176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0721 23:29:40.898318    4176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0721 23:29:40.926415    4176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0721 23:29:40.942572    4176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0721 23:29:40.955050    4176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0721 23:29:40.983046    4176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0721 23:29:41.000114    4176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0721 23:29:41.011981    4176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0721 23:29:41.034246    4176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0721 23:29:41.109187    4176 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0721 23:29:41.109484    4176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0721 23:29:41.275661    4176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0721 23:29:41.275661    4176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0721 23:29:41.276292    4176 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0721 23:29:41.570224    4176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0721 23:29:41.573577    4176 out.go:204]   - Generating certificates and keys ...
	I0721 23:29:41.573828    4176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0721 23:29:41.574122    4176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0721 23:29:42.050034    4176 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0721 23:29:42.251939    4176 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0721 23:29:42.331237    4176 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0721 23:29:42.430196    4176 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0721 23:29:42.525971    4176 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0721 23:29:42.526344    4176 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-979300 localhost] and IPs [172.28.202.6 127.0.0.1 ::1]
	I0721 23:29:42.902196    4176 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0721 23:29:42.902196    4176 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-979300 localhost] and IPs [172.28.202.6 127.0.0.1 ::1]
	I0721 23:29:43.349933    4176 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0721 23:29:43.585219    4176 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0721 23:29:43.865442    4176 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0721 23:29:43.865846    4176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0721 23:29:43.949876    4176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0721 23:29:44.039591    4176 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0721 23:29:44.204418    4176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0721 23:29:44.340676    4176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0721 23:29:44.504318    4176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0721 23:29:44.509483    4176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0721 23:29:44.513895    4176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0721 23:29:44.517949    4176 out.go:204]   - Booting up control plane ...
	I0721 23:29:44.518273    4176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0721 23:29:44.518619    4176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0721 23:29:44.520669    4176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0721 23:29:44.549803    4176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0721 23:29:44.550320    4176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0721 23:29:44.550446    4176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0721 23:29:44.749856    4176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0721 23:29:44.750311    4176 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0721 23:29:45.257312    4176 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 508.274973ms
	I0721 23:29:45.257413    4176 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0721 23:29:52.262071    4176 kubeadm.go:310] [api-check] The API server is healthy after 7.004598369s
	I0721 23:29:52.281353    4176 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0721 23:29:52.320984    4176 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0721 23:29:52.371721    4176 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0721 23:29:52.372787    4176 kubeadm.go:310] [mark-control-plane] Marking the node addons-979300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0721 23:29:52.387082    4176 kubeadm.go:310] [bootstrap-token] Using token: 893dpx.hrs96aj3ae8jouze
	I0721 23:29:52.391339    4176 out.go:204]   - Configuring RBAC rules ...
	I0721 23:29:52.391755    4176 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0721 23:29:52.398942    4176 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0721 23:29:52.414175    4176 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0721 23:29:52.430570    4176 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0721 23:29:52.436755    4176 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0721 23:29:52.449518    4176 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0721 23:29:52.681509    4176 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0721 23:29:53.144422    4176 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0721 23:29:53.677231    4176 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0721 23:29:53.678010    4176 kubeadm.go:310] 
	I0721 23:29:53.678845    4176 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0721 23:29:53.678845    4176 kubeadm.go:310] 
	I0721 23:29:53.678845    4176 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0721 23:29:53.678845    4176 kubeadm.go:310] 
	I0721 23:29:53.678845    4176 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0721 23:29:53.679438    4176 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0721 23:29:53.679438    4176 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0721 23:29:53.679679    4176 kubeadm.go:310] 
	I0721 23:29:53.679811    4176 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0721 23:29:53.679847    4176 kubeadm.go:310] 
	I0721 23:29:53.679945    4176 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0721 23:29:53.679980    4176 kubeadm.go:310] 
	I0721 23:29:53.680037    4176 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0721 23:29:53.680037    4176 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0721 23:29:53.680037    4176 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0721 23:29:53.680037    4176 kubeadm.go:310] 
	I0721 23:29:53.680037    4176 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0721 23:29:53.680571    4176 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0721 23:29:53.680893    4176 kubeadm.go:310] 
	I0721 23:29:53.681126    4176 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 893dpx.hrs96aj3ae8jouze \
	I0721 23:29:53.681251    4176 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b \
	I0721 23:29:53.681251    4176 kubeadm.go:310] 	--control-plane 
	I0721 23:29:53.681251    4176 kubeadm.go:310] 
	I0721 23:29:53.681251    4176 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0721 23:29:53.681251    4176 kubeadm.go:310] 
	I0721 23:29:53.681797    4176 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 893dpx.hrs96aj3ae8jouze \
	I0721 23:29:53.682259    4176 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b 
	I0721 23:29:53.682494    4176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0721 23:29:53.682494    4176 cni.go:84] Creating CNI manager for ""
	I0721 23:29:53.682622    4176 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 23:29:53.687013    4176 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0721 23:29:53.700054    4176 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0721 23:29:53.719952    4176 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
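
The 496-byte conflist copied above is not reproduced in the log; a typical bridge conflist for the 10.244.0.0/16 pod CIDR chosen earlier would look like the following (illustrative content, not the exact bytes):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
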
	I0721 23:29:53.758432    4176 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0721 23:29:53.772881    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:53.773248    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-979300 minikube.k8s.io/updated_at=2024_07_21T23_29_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=addons-979300 minikube.k8s.io/primary=true
	I0721 23:29:53.782697    4176 ops.go:34] apiserver oom_adj: -16
	I0721 23:29:53.931392    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:54.431263    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:54.932707    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:55.432623    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:55.936480    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:56.435446    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:56.934364    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:57.439529    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:57.942870    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:58.431055    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:58.933546    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:59.438087    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:29:59.939280    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:00.441156    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:00.934418    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:01.431441    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:01.937264    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:02.436415    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:02.942144    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:03.444195    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:03.933248    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:04.438160    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:04.939068    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:05.444626    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:05.932655    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:06.440196    4176 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0721 23:30:06.569367    4176 kubeadm.go:1113] duration metric: took 12.8106562s to wait for elevateKubeSystemPrivileges
	I0721 23:30:06.569452    4176 kubeadm.go:394] duration metric: took 25.8807171s to StartCluster
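
The burst of near-identical kubectl calls above is a plain readiness poll: retry "get sa default" until the ServiceAccount controller has created the default ServiceAccount, which minikube treats as the signal that the kube-system privilege grant can take effect. Equivalent loop:

    # Poll roughly twice a second, matching the ~500ms spacing in the log.
    KUBECTL=/var/lib/minikube/binaries/v1.30.3/kubectl
    KCFG=/var/lib/minikube/kubeconfig
    until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
      sleep 0.5
    done
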
	I0721 23:30:06.569452    4176 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:30:06.569783    4176 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0721 23:30:06.570485    4176 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 23:30:06.571949    4176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0721 23:30:06.571949    4176 start.go:235] Will wait 6m0s for node &{Name: IP:172.28.202.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 23:30:06.571949    4176 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0721 23:30:06.572674    4176 addons.go:69] Setting yakd=true in profile "addons-979300"
	I0721 23:30:06.572753    4176 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-979300"
	I0721 23:30:06.572828    4176 addons.go:234] Setting addon yakd=true in "addons-979300"
	I0721 23:30:06.572903    4176 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-979300"
	I0721 23:30:06.572903    4176 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-979300"
	I0721 23:30:06.572903    4176 addons.go:69] Setting metrics-server=true in profile "addons-979300"
	I0721 23:30:06.572993    4176 addons.go:234] Setting addon metrics-server=true in "addons-979300"
	I0721 23:30:06.573078    4176 config.go:182] Loaded profile config "addons-979300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:30:06.573157    4176 addons.go:69] Setting ingress=true in profile "addons-979300"
	I0721 23:30:06.573157    4176 addons.go:69] Setting gcp-auth=true in profile "addons-979300"
	I0721 23:30:06.573211    4176 addons.go:69] Setting inspektor-gadget=true in profile "addons-979300"
	I0721 23:30:06.573211    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.572867    4176 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-979300"
	I0721 23:30:06.573260    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.573260    4176 addons.go:69] Setting volumesnapshots=true in profile "addons-979300"
	I0721 23:30:06.573322    4176 addons.go:234] Setting addon volumesnapshots=true in "addons-979300"
	I0721 23:30:06.573124    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.573211    4176 addons.go:234] Setting addon inspektor-gadget=true in "addons-979300"
	I0721 23:30:06.573260    4176 mustload.go:65] Loading cluster: addons-979300
	I0721 23:30:06.573677    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.573157    4176 addons.go:234] Setting addon ingress=true in "addons-979300"
	I0721 23:30:06.573962    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.572711    4176 addons.go:69] Setting cloud-spanner=true in profile "addons-979300"
	I0721 23:30:06.574283    4176 addons.go:234] Setting addon cloud-spanner=true in "addons-979300"
	I0721 23:30:06.574441    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.574441    4176 config.go:182] Loaded profile config "addons-979300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:30:06.573124    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.573124    4176 addons.go:69] Setting registry=true in profile "addons-979300"
	I0721 23:30:06.575203    4176 addons.go:234] Setting addon registry=true in "addons-979300"
	I0721 23:30:06.573157    4176 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-979300"
	I0721 23:30:06.575373    4176 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-979300"
	I0721 23:30:06.575480    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.572674    4176 addons.go:69] Setting default-storageclass=true in profile "addons-979300"
	I0721 23:30:06.573157    4176 addons.go:69] Setting helm-tiller=true in profile "addons-979300"
	I0721 23:30:06.573157    4176 addons.go:69] Setting volcano=true in profile "addons-979300"
	I0721 23:30:06.573157    4176 addons.go:69] Setting ingress-dns=true in profile "addons-979300"
	I0721 23:30:06.573536    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.572867    4176 addons.go:69] Setting storage-provisioner=true in profile "addons-979300"
	I0721 23:30:06.575480    4176 addons.go:234] Setting addon storage-provisioner=true in "addons-979300"
	I0721 23:30:06.575696    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.575891    4176 addons.go:234] Setting addon volcano=true in "addons-979300"
	I0721 23:30:06.575992    4176 addons.go:234] Setting addon ingress-dns=true in "addons-979300"
	I0721 23:30:06.576089    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.576181    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.575891    4176 out.go:177] * Verifying Kubernetes components...
	I0721 23:30:06.575480    4176 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-979300"
	I0721 23:30:06.575992    4176 addons.go:234] Setting addon helm-tiller=true in "addons-979300"
	I0721 23:30:06.576910    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:06.576996    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.577276    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.581970    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.581970    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.581970    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.591486    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.591813    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.592351    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.593043    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.595646    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.603055    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.608838    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.609589    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.614281    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.615693    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.618728    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:06.620950    4176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:30:08.108064    4176 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.5360954s)
	I0721 23:30:08.109066    4176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0721 23:30:08.109066    4176 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.4880971s)
	I0721 23:30:08.130698    4176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0721 23:30:11.934591    4176 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.8254767s)
	I0721 23:30:11.934591    4176 start.go:971] {"host.minikube.internal": 172.28.192.1} host record injected into CoreDNS's ConfigMap
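	Note: the /bin/bash pipeline that just completed rewrites the coredns ConfigMap in place — sed splices a hosts block in front of the forward plugin (and a log directive in front of errors), and the result is piped back through kubectl replace. Reconstructed from the sed expression above, the stanza added to the Corefile is:
	
	    hosts {
	       172.28.192.1 host.minikube.internal
	       fallthrough
	    }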
	I0721 23:30:11.944702    4176 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.8139555s)
	I0721 23:30:11.947593    4176 node_ready.go:35] waiting up to 6m0s for node "addons-979300" to be "Ready" ...
	I0721 23:30:12.300376    4176 node_ready.go:49] node "addons-979300" has status "Ready":"True"
	I0721 23:30:12.300376    4176 node_ready.go:38] duration metric: took 352.7779ms for node "addons-979300" to be "Ready" ...
	I0721 23:30:12.300376    4176 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0721 23:30:12.459425    4176 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g7hmt" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:12.651110    4176 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-979300" context rescaled to 1 replica
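	Note: both steps above have hand-run kubectl equivalents. minikube drives the API directly rather than shelling out, so these are approximations only, using the kubeconfig path from this log:
	
	    # roughly what pod_ready polls for:
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system wait --for=condition=Ready pod/coredns-7db6d8ff4d-g7hmt --timeout=6m
	    # roughly what the kapi.go rescale did:
	    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system scale deployment coredns --replicas=1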
	I0721 23:30:13.551386    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:13.551386    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:13.554387    4176 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0721 23:30:13.556386    4176 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0721 23:30:13.556386    4176 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
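	Note: every "installing /etc/kubernetes/addons/..." / "scp ... -->" pair below follows the same pattern — the manifest bytes are streamed over the machine's SSH session into the guest, and the staged files are applied later in batched kubectl runs. A hand-run equivalent of this one step, assuming the docker user cannot write /etc directly (hence the hop through /tmp) and using the key path and IP that appear further down in this log:
	
	    scp -i C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa metrics-apiservice.yaml docker@172.28.202.6:/tmp/
	    ssh -i C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa docker@172.28.202.6 "sudo mv /tmp/metrics-apiservice.yaml /etc/kubernetes/addons/"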
	I0721 23:30:13.556386    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:13.624373    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:13.624373    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:13.627379    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:13.627379    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:13.630394    4176 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-979300"
	I0721 23:30:13.630394    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:13.632393    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:13.643254    4176 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0721 23:30:13.659809    4176 out.go:177]   - Using image docker.io/registry:2.8.3
	I0721 23:30:13.666808    4176 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0721 23:30:13.666808    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0721 23:30:13.666808    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:13.871404    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:13.871404    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:13.878400    4176 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0721 23:30:13.882392    4176 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0721 23:30:13.885396    4176 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0721 23:30:13.893408    4176 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0721 23:30:13.893408    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0721 23:30:13.893408    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:13.897449    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:13.897449    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:13.906358    4176 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0721 23:30:13.909369    4176 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0721 23:30:13.909369    4176 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0721 23:30:13.909369    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:13.999435    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:13.999435    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:14.016438    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:14.016438    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:14.043271    4176 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0721 23:30:14.084126    4176 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0721 23:30:14.103641    4176 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0721 23:30:14.111408    4176 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0721 23:30:14.117200    4176 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0721 23:30:14.121716    4176 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0721 23:30:14.131333    4176 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0721 23:30:14.138324    4176 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0721 23:30:14.156236    4176 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0721 23:30:14.156515    4176 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0721 23:30:14.159139    4176 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0721 23:30:14.159139    4176 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0721 23:30:14.160134    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:14.208687    4176 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0721 23:30:14.222724    4176 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0721 23:30:14.222724    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0721 23:30:14.223780    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:14.231063    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:14.231063    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:14.243951    4176 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0721 23:30:14.246777    4176 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0721 23:30:14.246777    4176 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0721 23:30:14.246777    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:14.254528    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:14.254636    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:14.254636    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:14.295448    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:14.295448    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:14.298880    4176 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0721 23:30:14.301109    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:14.301109    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:14.303335    4176 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0721 23:30:14.303403    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0721 23:30:14.306689    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:14.305128    4176 addons.go:234] Setting addon default-storageclass=true in "addons-979300"
	I0721 23:30:14.307253    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:14.311035    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:14.424455    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:14.424455    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:14.435462    4176 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0721 23:30:14.439765    4176 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0721 23:30:14.440457    4176 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0721 23:30:14.440584    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:14.742264    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:14.742264    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:14.742264    4176 pod_ready.go:102] pod "coredns-7db6d8ff4d-g7hmt" in "kube-system" namespace has status "Ready":"False"
	I0721 23:30:14.747262    4176 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0721 23:30:14.754444    4176 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0721 23:30:14.754444    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0721 23:30:14.754444    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:14.727579    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:14.858091    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:14.898594    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:14.898594    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:14.918528    4176 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0721 23:30:14.922623    4176 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0721 23:30:14.922623    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0721 23:30:14.922623    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:14.899502    4176 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0721 23:30:15.396783    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:15.397787    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:15.412800    4176 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0721 23:30:15.418785    4176 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0721 23:30:15.418785    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0721 23:30:15.418785    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:18.428041    4176 pod_ready.go:102] pod "coredns-7db6d8ff4d-g7hmt" in "kube-system" namespace has status "Ready":"False"
	I0721 23:30:18.990265    4176 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 23:30:18.990265    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0721 23:30:18.990265    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:20.007257    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:20.007464    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:20.012533    4176 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0721 23:30:20.019015    4176 out.go:177]   - Using image docker.io/busybox:stable
	I0721 23:30:20.021827    4176 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0721 23:30:20.021827    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0721 23:30:20.021827    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:20.220235    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:20.221140    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:20.221140    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:20.536679    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:20.536679    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:20.536679    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:20.596249    4176 pod_ready.go:102] pod "coredns-7db6d8ff4d-g7hmt" in "kube-system" namespace has status "Ready":"False"
	I0721 23:30:20.628703    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:20.628774    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:20.628774    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:20.631049    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:20.631049    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:20.631049    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:21.088929    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:21.088929    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:21.088929    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:21.132889    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:21.132970    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:21.133071    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:21.167852    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:21.168863    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:21.168863    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:21.189868    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:21.189868    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:21.189868    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:21.191849    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:21.191849    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:21.191849    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:21.550188    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:21.550188    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:21.550188    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:21.865465    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:21.865465    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:21.865465    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:22.077395    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:22.077395    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:22.077395    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
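	Note: the burst of (( Hyper-V\Get-VM ... ).networkadapters[0]).ipaddresses[0] queries above is address discovery — each goroutine resolves the guest's IP (first address on the first adapter) before dialing SSH, retrying until the VM has registered one. Hand-run form:
	
	    powershell.exe -NoProfile -NonInteractive "(( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]"
	    # eventually prints 172.28.202.6, the address the ssh clients below connect to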
	I0721 23:30:22.091749    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:22.091749    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:22.091749    4176 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0721 23:30:22.091749    4176 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0721 23:30:22.091749    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:22.600818    4176 pod_ready.go:102] pod "coredns-7db6d8ff4d-g7hmt" in "kube-system" namespace has status "Ready":"False"
	I0721 23:30:23.979888    4176 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0721 23:30:23.979888    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:24.262399    4176 pod_ready.go:92] pod "coredns-7db6d8ff4d-g7hmt" in "kube-system" namespace has status "Ready":"True"
	I0721 23:30:24.262399    4176 pod_ready.go:81] duration metric: took 11.802826s for pod "coredns-7db6d8ff4d-g7hmt" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.262399    4176 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-plrcz" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.310368    4176 pod_ready.go:92] pod "coredns-7db6d8ff4d-plrcz" in "kube-system" namespace has status "Ready":"True"
	I0721 23:30:24.310368    4176 pod_ready.go:81] duration metric: took 47.9678ms for pod "coredns-7db6d8ff4d-plrcz" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.310368    4176 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-979300" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.342339    4176 pod_ready.go:92] pod "etcd-addons-979300" in "kube-system" namespace has status "Ready":"True"
	I0721 23:30:24.342339    4176 pod_ready.go:81] duration metric: took 31.9713ms for pod "etcd-addons-979300" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.342339    4176 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-979300" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.742807    4176 pod_ready.go:92] pod "kube-apiserver-addons-979300" in "kube-system" namespace has status "Ready":"True"
	I0721 23:30:24.742807    4176 pod_ready.go:81] duration metric: took 400.4625ms for pod "kube-apiserver-addons-979300" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.742807    4176 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-979300" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.793690    4176 pod_ready.go:92] pod "kube-controller-manager-addons-979300" in "kube-system" namespace has status "Ready":"True"
	I0721 23:30:24.793690    4176 pod_ready.go:81] duration metric: took 50.8825ms for pod "kube-controller-manager-addons-979300" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.793690    4176 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j7wv2" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.828888    4176 pod_ready.go:92] pod "kube-proxy-j7wv2" in "kube-system" namespace has status "Ready":"True"
	I0721 23:30:24.828888    4176 pod_ready.go:81] duration metric: took 35.1971ms for pod "kube-proxy-j7wv2" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.828888    4176 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-979300" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.855252    4176 pod_ready.go:92] pod "kube-scheduler-addons-979300" in "kube-system" namespace has status "Ready":"True"
	I0721 23:30:24.855252    4176 pod_ready.go:81] duration metric: took 26.3643ms for pod "kube-scheduler-addons-979300" in "kube-system" namespace to be "Ready" ...
	I0721 23:30:24.855252    4176 pod_ready.go:38] duration metric: took 12.5547185s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0721 23:30:24.855252    4176 api_server.go:52] waiting for apiserver process to appear ...
	I0721 23:30:24.876239    4176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 23:30:24.977945    4176 api_server.go:72] duration metric: took 18.4052297s to wait for apiserver process to appear ...
	I0721 23:30:24.977945    4176 api_server.go:88] waiting for apiserver healthz status ...
	I0721 23:30:24.977945    4176 api_server.go:253] Checking apiserver healthz at https://172.28.202.6:8443/healthz ...
	I0721 23:30:25.004107    4176 api_server.go:279] https://172.28.202.6:8443/healthz returned 200:
	ok
	I0721 23:30:25.008358    4176 api_server.go:141] control plane version: v1.30.3
	I0721 23:30:25.008899    4176 api_server.go:131] duration metric: took 30.9537ms to wait for apiserver health ...
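	Note: the healthz probe is a plain HTTPS GET expecting the literal body "ok". Default RBAC exposes /healthz to unauthenticated clients (the system:public-info-viewer binding), so the same check works by hand; -k stands in for pointing --cacert at the profile's CA:
	
	    curl -k https://172.28.202.6:8443/healthz
	    # ok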
	I0721 23:30:25.008899    4176 system_pods.go:43] waiting for kube-system pods to appear ...
	I0721 23:30:25.039744    4176 system_pods.go:59] 7 kube-system pods found
	I0721 23:30:25.039744    4176 system_pods.go:61] "coredns-7db6d8ff4d-g7hmt" [54bb198a-ff86-4927-8a9c-0e8aaf4ab5d3] Running
	I0721 23:30:25.039744    4176 system_pods.go:61] "coredns-7db6d8ff4d-plrcz" [a10d3b70-1b43-4d70-a13b-df5ec44335d9] Running
	I0721 23:30:25.039744    4176 system_pods.go:61] "etcd-addons-979300" [99001f7f-f58a-4720-b1a6-c949218011c1] Running
	I0721 23:30:25.039744    4176 system_pods.go:61] "kube-apiserver-addons-979300" [c1a24587-b8bb-4f03-9473-7dd0cfbc45e4] Running
	I0721 23:30:25.039744    4176 system_pods.go:61] "kube-controller-manager-addons-979300" [ceb66b81-f364-4e31-8b08-8dcbec8a6c26] Running
	I0721 23:30:25.039744    4176 system_pods.go:61] "kube-proxy-j7wv2" [74a4a9c9-8720-425e-8aa5-6a1d408d70b2] Running
	I0721 23:30:25.039744    4176 system_pods.go:61] "kube-scheduler-addons-979300" [e038d760-14b8-4f95-a246-39953d95982f] Running
	I0721 23:30:25.039744    4176 system_pods.go:74] duration metric: took 30.8447ms to wait for pod list to return data ...
	I0721 23:30:25.039744    4176 default_sa.go:34] waiting for default service account to be created ...
	I0721 23:30:25.091443    4176 default_sa.go:45] found service account: "default"
	I0721 23:30:25.091443    4176 default_sa.go:55] duration metric: took 51.6988ms for default service account to be created ...
	I0721 23:30:25.091443    4176 system_pods.go:116] waiting for k8s-apps to be running ...
	I0721 23:30:25.293034    4176 system_pods.go:86] 7 kube-system pods found
	I0721 23:30:25.293034    4176 system_pods.go:89] "coredns-7db6d8ff4d-g7hmt" [54bb198a-ff86-4927-8a9c-0e8aaf4ab5d3] Running
	I0721 23:30:25.293034    4176 system_pods.go:89] "coredns-7db6d8ff4d-plrcz" [a10d3b70-1b43-4d70-a13b-df5ec44335d9] Running
	I0721 23:30:25.293034    4176 system_pods.go:89] "etcd-addons-979300" [99001f7f-f58a-4720-b1a6-c949218011c1] Running
	I0721 23:30:25.293034    4176 system_pods.go:89] "kube-apiserver-addons-979300" [c1a24587-b8bb-4f03-9473-7dd0cfbc45e4] Running
	I0721 23:30:25.293034    4176 system_pods.go:89] "kube-controller-manager-addons-979300" [ceb66b81-f364-4e31-8b08-8dcbec8a6c26] Running
	I0721 23:30:25.293034    4176 system_pods.go:89] "kube-proxy-j7wv2" [74a4a9c9-8720-425e-8aa5-6a1d408d70b2] Running
	I0721 23:30:25.293034    4176 system_pods.go:89] "kube-scheduler-addons-979300" [e038d760-14b8-4f95-a246-39953d95982f] Running
	I0721 23:30:25.293034    4176 system_pods.go:126] duration metric: took 201.588ms to wait for k8s-apps to be running ...
	I0721 23:30:25.293034    4176 system_svc.go:44] waiting for kubelet service to be running ...
	I0721 23:30:25.315033    4176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 23:30:25.464440    4176 system_svc.go:56] duration metric: took 171.4036ms (WaitForService) to wait for kubelet
	I0721 23:30:25.464440    4176 kubeadm.go:582] duration metric: took 18.8917185s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 23:30:25.464440    4176 node_conditions.go:102] verifying NodePressure condition ...
	I0721 23:30:25.509608    4176 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0721 23:30:25.509608    4176 node_conditions.go:123] node cpu capacity is 2
	I0721 23:30:25.510089    4176 node_conditions.go:105] duration metric: took 45.649ms to run NodePressure ...
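	Note: the NodePressure verification reads capacity straight from the node's status; the same figures can be pulled with kubectl (values as logged above):
	
	    kubectl get node addons-979300 -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'
	    # 2 17734596Ki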
	I0721 23:30:25.510089    4176 start.go:241] waiting for startup goroutines ...
	I0721 23:30:25.637834    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:25.637834    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:25.637834    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:26.258171    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:26.258171    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:26.258171    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:27.193501    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:27.193501    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:27.206502    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
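	Note: the sshutil.go:53 struct maps one-to-one onto an ordinary OpenSSH invocation, which is handy when reproducing a failure interactively:
	
	    ssh -p 22 -i C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa docker@172.28.202.6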
	I0721 23:30:27.682410    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0721 23:30:28.159368    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:28.159368    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:28.182590    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:28.727127    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:28.727127    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:28.729134    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:28.747856    4176 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0721 23:30:28.747856    4176 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0721 23:30:28.854985    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:28.854985    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:28.854985    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:28.924683    4176 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0721 23:30:28.924881    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0721 23:30:28.972953    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:28.972953    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:28.973953    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:29.065929    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:29.066484    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:29.067121    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:29.140892    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0721 23:30:29.162930    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:29.163034    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:29.167980    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:29.285662    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:29.285740    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:29.286631    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:29.368385    4176 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0721 23:30:29.368505    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0721 23:30:29.434483    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:29.434483    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:29.435241    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:29.531017    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:29.531096    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:29.531953    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:29.545740    4176 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0721 23:30:29.545951    4176 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0721 23:30:29.563207    4176 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0721 23:30:29.563207    4176 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0721 23:30:29.573214    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0721 23:30:29.623842    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:29.623842    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:29.624606    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:29.665379    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:29.665379    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:29.665379    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:29.668546    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:29.668546    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:29.668808    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:29.698776    4176 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0721 23:30:29.698977    4176 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0721 23:30:29.728635    4176 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0721 23:30:29.728635    4176 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0721 23:30:29.811137    4176 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0721 23:30:29.811224    4176 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0721 23:30:29.815007    4176 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0721 23:30:29.815007    4176 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0721 23:30:29.904917    4176 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0721 23:30:29.904917    4176 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0721 23:30:29.985512    4176 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0721 23:30:29.985512    4176 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0721 23:30:30.014483    4176 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0721 23:30:30.014483    4176 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0721 23:30:30.047130    4176 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0721 23:30:30.047221    4176 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0721 23:30:30.066697    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0721 23:30:30.111049    4176 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0721 23:30:30.111049    4176 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0721 23:30:30.127587    4176 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0721 23:30:30.127587    4176 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0721 23:30:30.241974    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:30.241974    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:30.242973    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:30.258907    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0721 23:30:30.381226    4176 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0721 23:30:30.381295    4176 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0721 23:30:30.470499    4176 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0721 23:30:30.470569    4176 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0721 23:30:30.478942    4176 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0721 23:30:30.478942    4176 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0721 23:30:30.572161    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0721 23:30:30.575086    4176 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0721 23:30:30.575086    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0721 23:30:30.605858    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0721 23:30:30.681819    4176 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0721 23:30:30.681819    4176 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0721 23:30:30.801925    4176 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0721 23:30:30.802098    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0721 23:30:30.804115    4176 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0721 23:30:30.804115    4176 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0721 23:30:30.822638    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:30.822638    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:30.823463    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:30.840108    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0721 23:30:30.852107    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:30.852107    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:30.852107    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:30.908234    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0721 23:30:30.953044    4176 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0721 23:30:30.953274    4176 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0721 23:30:31.015299    4176 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0721 23:30:31.015299    4176 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0721 23:30:31.135449    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0721 23:30:31.206186    4176 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0721 23:30:31.206186    4176 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0721 23:30:31.320473    4176 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0721 23:30:31.320473    4176 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0721 23:30:31.486047    4176 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0721 23:30:31.486111    4176 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0721 23:30:31.575696    4176 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0721 23:30:31.575755    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0721 23:30:31.633295    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0721 23:30:31.745981    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0721 23:30:31.914117    4176 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0721 23:30:31.914117    4176 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0721 23:30:31.931149    4176 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0721 23:30:31.931184    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0721 23:30:32.248738    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0721 23:30:32.316707    4176 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0721 23:30:32.316786    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0721 23:30:32.795152    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:32.795880    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:32.796501    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:32.877524    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:32.877761    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:32.878367    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:32.894524    4176 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0721 23:30:32.895525    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0721 23:30:33.291623    4176 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0721 23:30:33.291690    4176 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0721 23:30:33.610781    4176 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0721 23:30:33.875817    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0721 23:30:33.884807    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0721 23:30:34.418684    4176 addons.go:234] Setting addon gcp-auth=true in "addons-979300"
	I0721 23:30:34.418912    4176 host.go:66] Checking if "addons-979300" exists ...
	I0721 23:30:34.420464    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:36.730722    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:36.730722    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:36.745522    4176 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0721 23:30:36.745569    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-979300 ).state
	I0721 23:30:38.226646    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.5441035s)
	I0721 23:30:38.226646    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.0856391s)
	I0721 23:30:38.226646    4176 addons.go:475] Verifying addon ingress=true in "addons-979300"
	I0721 23:30:38.226646    4176 addons.go:475] Verifying addon registry=true in "addons-979300"
	I0721 23:30:38.245070    4176 out.go:177] * Verifying ingress addon...
	I0721 23:30:38.247990    4176 out.go:177] * Verifying registry addon...
	I0721 23:30:38.269010    4176 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0721 23:30:38.269438    4176 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0721 23:30:38.287480    4176 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0721 23:30:38.287480    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:38.292392    4176 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0721 23:30:38.292475    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
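	Note: kapi.go:96 keeps listing pods by label selector until every match reports Ready; the two selectors being tracked here can be inspected by hand:
	
	    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	    kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx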
	I0721 23:30:38.807225    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:38.815266    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:39.316917    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:39.323849    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:39.499144    4176 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:30:39.499413    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:39.499527    4176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-979300 ).networkadapters[0]).ipaddresses[0]
	I0721 23:30:39.841591    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:39.851681    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:40.346353    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:40.401037    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:40.973235    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:40.973235    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:41.295767    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:41.319806    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:41.789811    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:41.794711    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:42.285454    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:42.285454    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:42.526012    4176 main.go:141] libmachine: [stdout =====>] : 172.28.202.6
	
	I0721 23:30:42.526012    4176 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:30:42.526799    4176 sshutil.go:53] new ssh client: &{IP:172.28.202.6 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-979300\id_rsa Username:docker}
	I0721 23:30:42.782828    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:42.783731    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:43.342989    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:43.342989    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:43.803522    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:43.811660    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:44.317038    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:44.317144    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:44.797811    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:44.806823    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:45.297008    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:45.298407    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:45.793788    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (16.2203323s)
	I0721 23:30:45.793906    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (15.7270113s)
	I0721 23:30:45.794043    4176 addons.go:475] Verifying addon metrics-server=true in "addons-979300"
	I0721 23:30:45.794099    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (15.5349412s)
	I0721 23:30:45.794172    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (15.2217883s)
	I0721 23:30:45.794282    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (15.1882328s)
	I0721 23:30:45.794338    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (14.954042s)
	I0721 23:30:45.794391    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (14.8859707s)
	W0721 23:30:45.794391    4176 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0721 23:30:45.794391    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (14.6587586s)
	I0721 23:30:45.794391    4176 retry.go:31] will retry after 309.883876ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
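
The two failures above are a CRD-establishment race: kubectl applies the VolumeSnapshotClass object in the same batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet registered the new kind, hence "no matches for kind ... ensure CRDs are installed first". minikube's remedy is simply to re-apply after a short backoff (the retry.go:31 line above). A more targeted fix is to wait for the CRD's Established condition before applying dependent objects; a minimal client-go sketch of that pattern follows (the function name, interval, and timeout are illustrative assumptions, not minikube's actual code):

	// Sketch only: wait for a CRD to report Established before applying
	// custom resources that depend on it (assumed helper, not minikube code).
	package addons

	import (
		"context"
		"time"

		apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	func waitForCRDEstablished(ctx context.Context, cs apiextensionsclient.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 30*time.Second, true,
			func(ctx context.Context) (bool, error) {
				crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // CRD not visible yet; keep polling
				}
				if err != nil {
					return false, err
				}
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
						return true, nil // "no matches for kind" stops once Established
					}
				}
				return false, nil
			})
	}

The blind retry converges to the same result here, which is why the forced re-apply issued at 23:30:46 below completes cleanly about two seconds later.
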
	I0721 23:30:45.794391    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.1609183s)
	I0721 23:30:45.794391    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (14.0482341s)
	I0721 23:30:45.794391    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (13.5444786s)
	I0721 23:30:45.795065    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.9190987s)
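
Note that the "Completed:" lines above all land within the same few milliseconds (23:30:45.79x) with elapsed times ranging from ~11.9s to ~16.2s: the addon manifests are applied concurrently, roughly one goroutine per addon, and each result is logged as it finishes. A hedged sketch of that fan-out shape using errgroup (an illustrative assumption, not minikube's actual code):

	// Sketch only: apply several addon manifest groups in parallel and
	// surface the first error, matching the overlapping durations above.
	package addons

	import (
		"context"
		"os/exec"

		"golang.org/x/sync/errgroup"
	)

	func applyAll(ctx context.Context, kubectl string, manifestGroups [][]string) error {
		g, ctx := errgroup.WithContext(ctx)
		for _, files := range manifestGroups {
			files := files // per-iteration copy for the closure
			g.Go(func() error {
				args := []string{"apply"}
				for _, f := range files {
					args = append(args, "-f", f)
				}
				return exec.CommandContext(ctx, kubectl, args...).Run()
			})
		}
		return g.Wait() // first non-nil error, after all goroutines return
	}
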
	I0721 23:30:45.799870    4176 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-979300 service yakd-dashboard -n yakd-dashboard
	
	I0721 23:30:45.867118    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:45.868011    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0721 23:30:45.874174    4176 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
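
The warning above is a plain optimistic-concurrency conflict: the default-storageclass callback and the just-applied storage-provisioner-rancher addon both write the local-path StorageClass, and the later writer's stale resourceVersion is rejected with "the object has been modified". The standard client-go remedy is to re-read and retry on conflict; a minimal sketch (the helper and its annotation handling are an assumed illustration, not minikube's code):

	// Sketch only: mark a StorageClass non-default, retrying on
	// resourceVersion conflicts (assumed helper, not minikube code).
	package addons

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read each attempt so the update carries a fresh resourceVersion.
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}
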
	I0721 23:30:46.133370    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0721 23:30:46.316907    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:46.321840    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:46.832194    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:46.852069    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:47.002029    4176 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.2562669s)
	I0721 23:30:47.007465    4176 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0721 23:30:47.009148    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (13.1241767s)
	I0721 23:30:47.009148    4176 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-979300"
	I0721 23:30:47.017513    4176 out.go:177] * Verifying csi-hostpath-driver addon...
	I0721 23:30:47.018442    4176 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0721 23:30:47.022898    4176 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0721 23:30:47.023943    4176 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0721 23:30:47.023943    4176 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0721 23:30:47.067134    4176 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0721 23:30:47.067134    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:47.215997    4176 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0721 23:30:47.218527    4176 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0721 23:30:47.261548    4176 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0721 23:30:47.261548    4176 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
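
The "scp memory --> ..." line above copies a manifest held in memory (an embedded asset) straight onto the node over the existing SSH connection, rather than from a local file. A rough equivalent with golang.org/x/crypto/ssh, streaming the bytes through sudo tee (the helper name and the tee approach are assumptions, not ssh_runner's actual mechanism):

	// Sketch only: write an in-memory manifest to a root-owned path on
	// the node over SSH (assumed shape of the "scp memory" step above).
	package addons

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	func copyToNode(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// tee runs under sudo because /etc/kubernetes is root-owned.
		return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
	}
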
	I0721 23:30:47.314584    4176 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0721 23:30:47.491555    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:47.494430    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:47.544737    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:47.788880    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:47.790879    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:48.041391    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:48.293913    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:48.293913    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:48.544563    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:48.790994    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.6575905s)
	I0721 23:30:48.801264    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:48.801264    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:48.922867    4176 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.60817s)
	I0721 23:30:48.935541    4176 addons.go:475] Verifying addon gcp-auth=true in "addons-979300"
	I0721 23:30:48.940968    4176 out.go:177] * Verifying gcp-auth addon...
	I0721 23:30:48.956896    4176 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0721 23:30:48.966790    4176 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
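
All of the kapi.go:96 "waiting for pod" lines in this log come from the same loop: list the pods matching a label selector in a namespace and report their phase until every one is Running. A minimal client-go equivalent (the selector strings come from the log; the function itself and its interval are an illustrative assumption):

	// Sketch only: poll pods by label selector until all are Running,
	// mirroring the kapi.go "current state: Pending" lines above.
	package addons

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				if len(pods.Items) == 0 {
					return false, nil // e.g. "Found 0 Pods" for gcp-auth above
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // still Pending, as logged repeatedly
					}
				}
				return true, nil
			})
	}

The gcp-auth verification above, for instance, amounts to waitForPods(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", timeout) with whatever timeout the addon verifier uses.
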
	I0721 23:30:49.039976    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:49.292913    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:49.293111    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:49.546586    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:49.781671    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:49.784088    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:50.035801    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:50.297625    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:50.297826    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:50.546094    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:50.781760    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:50.783432    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:51.043305    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:51.299055    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:51.299634    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:51.531843    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:51.780051    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:51.781387    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:52.030643    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:52.284404    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:52.284665    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:52.540032    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:52.776336    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:52.780839    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:53.039334    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:53.280269    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:53.280614    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:53.550351    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:53.780428    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:53.787038    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:54.050033    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:54.282066    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:54.282666    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:54.541089    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:54.779003    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:54.780527    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:55.036514    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:55.287286    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:55.287855    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:55.543792    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:55.789737    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:55.791132    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:56.194404    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:56.286057    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:56.290112    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:56.553312    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:56.786339    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:56.787164    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:57.307844    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:57.307844    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:57.310716    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:57.649326    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:57.824185    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:57.824590    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:58.047140    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:58.282969    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:58.283187    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:58.534441    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:58.791554    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:58.792292    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:59.039943    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:59.278040    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:30:59.278040    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:59.541770    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:30:59.862743    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:30:59.864532    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:00.046012    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:01.059340    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:01.065504    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:01.068758    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:01.098324    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:01.100788    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:01.104243    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:01.288473    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:01.290265    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:01.534766    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:01.786859    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:01.788945    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:02.048468    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:02.290119    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:02.297615    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:02.540127    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:02.794970    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:02.797305    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:03.043111    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:03.284121    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:03.287392    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:03.548443    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:03.802701    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:03.802701    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:04.035139    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:04.289217    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:04.290042    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:04.537332    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:04.796480    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:04.798156    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:05.038707    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:05.282105    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:05.282673    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:05.543602    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:05.794461    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:05.795770    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:06.030746    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:06.294653    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:06.304694    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:06.545972    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:06.784559    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:06.786227    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:07.052127    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:07.292626    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:07.293989    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:07.551490    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:07.781062    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:07.781736    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:08.041332    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:08.280272    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:08.283458    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:08.543324    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:08.777453    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:08.780386    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:09.046743    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:09.290757    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:09.291857    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:09.539168    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:09.782478    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:09.782478    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:10.049912    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:10.282618    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:10.285099    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:10.538269    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:10.792239    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:10.793764    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:11.044038    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:11.282169    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:11.286187    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:11.533072    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:11.784718    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:11.786061    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:12.043652    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:12.277651    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:12.280802    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:12.549383    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:12.781990    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:12.782049    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:13.051939    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:13.276164    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:13.276535    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:13.546315    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:13.793384    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:13.793384    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:14.036292    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:14.288313    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:14.289492    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:14.533727    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:14.796517    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:14.799593    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:15.034780    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:15.283354    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:15.283354    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:15.542701    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:15.788621    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:15.788784    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:16.054633    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:16.294422    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:16.296235    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:16.539315    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:16.796595    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:16.796801    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:17.046502    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:17.293257    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:17.294913    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:17.544144    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:17.793924    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:17.794039    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:18.042326    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:18.283814    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:18.284578    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:18.535486    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:18.795290    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:18.797532    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:19.056266    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:19.290355    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:19.291138    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:19.551575    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:19.783039    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:19.785118    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:20.207257    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:20.605304    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:20.606844    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:20.606974    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:20.814823    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:20.817403    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:21.036138    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:21.291591    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:21.292142    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:21.531737    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:21.971743    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:21.976814    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:22.056256    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:22.285823    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:22.286388    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:22.536186    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:22.780880    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:22.781542    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:23.038300    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:23.288266    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:23.288266    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:23.532883    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:23.791121    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:23.791121    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:24.031258    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:24.295382    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:24.298037    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:24.535821    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:24.789881    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:24.790467    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:25.059599    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:25.285047    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:25.285508    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:25.535833    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:25.793965    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:25.793965    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:26.042137    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:26.297388    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:26.299862    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:26.539100    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:26.794670    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:26.796744    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:27.053662    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:27.283914    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:27.286454    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:27.538419    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:27.785248    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:27.786208    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:28.039716    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:28.279208    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:28.280081    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:28.542545    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:28.799287    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:28.802377    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:29.048979    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:29.278750    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:29.280532    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:29.534651    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:29.777700    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:29.778045    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:30.049775    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:30.278273    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:30.278273    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:30.545811    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:30.780853    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:30.781219    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:31.034253    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:31.311131    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:31.314854    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:31.534227    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:31.802006    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:31.807347    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:32.051079    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:32.293441    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:32.295871    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:32.562770    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:32.788598    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:32.788598    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:33.061701    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:33.296800    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:33.297432    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:33.541753    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:33.807969    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:33.810827    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:34.048632    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:34.286802    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:34.286802    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:34.537168    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:34.784076    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:34.784238    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:35.034223    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:35.280287    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:35.282180    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:35.541150    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:35.781238    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:35.781238    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:36.049676    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:36.301774    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:36.304048    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:36.553270    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:36.786980    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:36.786980    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:37.032288    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:37.280686    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:37.280686    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:37.543250    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:37.778238    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:37.778601    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:38.039093    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:38.299558    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:38.301573    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:38.540195    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:38.849791    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:38.850576    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:39.032941    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:39.284144    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:39.284432    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:39.544314    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:39.791926    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:39.792020    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:40.035927    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:40.298366    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:40.300212    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:40.556681    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:40.836858    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:40.838162    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:41.059536    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:41.293417    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:41.296195    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:41.541731    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:41.789092    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:41.789937    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:42.040764    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:42.296642    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:42.296642    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:42.536731    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:42.791863    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:42.792229    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:43.038833    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:43.284169    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:43.286974    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:43.544789    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:43.790983    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:43.790983    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:44.046681    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:44.270744    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:44.270744    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:44.536627    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:44.784885    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:44.784885    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:45.046406    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:45.289611    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:45.297408    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:45.530671    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:45.783446    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:45.783446    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:46.027268    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:46.275359    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:46.279248    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:46.526029    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:46.775043    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:46.775043    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:47.038325    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:47.296999    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:47.299148    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:47.538641    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:47.777088    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:47.782531    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:48.053025    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:48.285643    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:48.292031    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:48.528966    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:48.778193    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:48.785326    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:49.043642    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:49.281889    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:49.282805    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:49.532621    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:49.796005    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:49.797665    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:50.040450    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:50.290161    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:50.290843    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:51.146542    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:51.146703    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:51.148252    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:51.156276    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:51.363871    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:51.364105    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:51.743052    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:51.789001    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:51.789975    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:52.050190    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:52.280553    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:52.281373    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0721 23:31:52.544762    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:52.849359    4176 kapi.go:107] duration metric: took 1m14.5795051s to wait for kubernetes.io/minikube-addons=registry ...
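	The "took 1m14.5795051s" figure above is consistent with Go's time.Duration string formatting. A minimal sketch (standard library only; hypothetical, not minikube's actual kapi.go code) of how such a duration metric line can be produced:

	    package main

	    import (
	    	"fmt"
	    	"time"
	    )

	    func main() {
	    	// Hypothetical illustration: time a wait and print it the way the
	    	// "duration metric: took ..." lines appear to, via time.Since,
	    	// which yields strings like "1m14.5795051s".
	    	start := time.Now()
	    	time.Sleep(50 * time.Millisecond) // stand-in for a real wait loop
	    	fmt.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=registry ...\n", time.Since(start))
	    }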
	I0721 23:31:52.849359    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:53.036972    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:53.281830    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:53.532236    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:53.777125    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:54.047862    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:54.280570    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:54.551922    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:54.781078    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:55.039934    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:55.281532    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:55.532337    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:56.009728    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:56.100397    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:56.358841    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:56.543842    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:56.778387    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:57.037046    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:57.295914    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:57.537494    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:57.785165    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:58.041716    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:58.276858    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:58.540881    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:58.784444    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:59.036592    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:59.300793    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:31:59.549190    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:31:59.782088    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:00.046311    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:00.286571    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:00.543641    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:00.961669    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:01.041735    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:01.296760    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:01.539653    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:01.788715    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:02.046020    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:02.278547    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:02.614235    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:02.784063    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:03.049494    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:03.286773    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:03.536837    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:03.816760    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:04.038301    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:04.290593    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:04.548693    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:04.788290    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:05.051101    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:05.283822    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:05.538578    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:05.799009    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:06.038702    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:06.291171    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:06.546803    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:06.784575    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:07.045650    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:07.283107    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:07.549930    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:07.786645    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:08.035559    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:08.295738    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:08.550302    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:08.793107    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:09.040213    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:09.293607    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:09.558241    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:09.780363    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:10.608312    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:10.613515    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:10.619656    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:10.781438    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:11.052268    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:11.288321    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:11.532390    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:11.798986    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:12.038265    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:12.293785    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:12.541516    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:12.782997    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:13.050087    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:13.281093    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:13.545248    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:13.946982    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:14.064694    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:14.287422    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:14.536999    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:14.792793    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:15.049253    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:15.290783    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:15.532021    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:15.788403    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:16.043929    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:16.281759    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:16.540687    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:16.778431    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:17.047135    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:17.286722    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:17.544086    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:17.783628    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:18.046678    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:18.296520    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:18.534329    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:18.794254    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:19.047195    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:19.283111    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:19.544795    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:19.790662    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:20.037254    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:20.278520    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:20.552470    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:20.783241    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:21.544514    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:21.545685    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:21.555803    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:21.793849    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:22.242004    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:22.279174    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:22.537071    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:22.782439    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:23.044458    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:23.290198    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:23.537756    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:23.780734    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:24.070022    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:24.296060    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:24.541417    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:24.796791    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:25.038066    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:25.292415    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:25.544679    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:26.095305    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:26.102767    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:26.286921    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:26.545006    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:26.788846    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:27.046482    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:27.417499    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:27.629577    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:27.783156    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:28.044435    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:28.288681    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:28.537489    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:28.791952    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:29.043504    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:29.280778    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:29.539902    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:29.782481    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:30.043407    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:30.287852    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:30.541180    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:30.777388    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:31.046395    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:31.292792    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:31.535242    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:31.784984    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:32.241427    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:32.306100    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:32.649716    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:32.787868    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:33.039266    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:33.300457    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:33.543654    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:33.791503    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:34.037827    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:34.297026    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:34.709737    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:34.781307    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:35.036731    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:35.280586    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:35.545662    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:35.786647    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:36.035793    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:36.806519    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:36.809329    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:36.975638    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:37.034794    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:37.282535    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:37.542981    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:37.832129    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:38.092778    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:38.278089    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:38.534477    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:38.795070    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:39.048432    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:39.279225    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:39.548220    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:40.107549    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:40.108370    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:40.284377    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:40.537681    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:40.787860    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:41.044752    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:41.282328    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:41.532969    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:41.794351    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:42.044537    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:42.279523    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:42.534990    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:42.789001    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:43.219531    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:43.349358    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:43.535832    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:43.787083    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:44.037586    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:44.289571    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:44.533277    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:44.780441    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:45.045343    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:45.283484    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:45.549332    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:45.781932    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:46.039752    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:46.285737    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:46.545613    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:46.793481    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:47.040079    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:47.284727    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:47.536715    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:47.793401    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:48.050195    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:48.287141    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:48.549861    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:48.793943    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:49.032556    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:49.307289    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:49.540083    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:49.783796    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:50.048019    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:50.293067    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:50.533277    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:50.799731    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:51.045669    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:51.288904    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:51.546998    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:51.797074    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:52.057947    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:52.303817    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:52.538935    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:52.786528    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:53.039145    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:53.282403    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:53.545638    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:53.794177    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:54.043423    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:54.283806    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:54.534948    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:54.796234    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:55.055295    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:55.290093    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:55.541149    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:55.800504    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:56.051147    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:56.290639    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:56.541817    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:56.859766    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:57.040801    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:57.281392    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:57.547939    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:57.796382    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:58.043171    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:58.285579    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:58.549373    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:58.783046    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:59.056375    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:59.294603    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:32:59.545113    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:32:59.779237    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:00.042590    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:00.286887    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:00.541049    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:00.789550    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:01.034339    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:01.296408    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:01.550783    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:01.781299    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:02.050622    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:02.298701    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:02.533677    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:02.791859    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:03.051334    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:03.283610    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:03.533874    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:03.780441    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:04.050863    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:04.280428    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:04.535615    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:04.791514    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:05.044020    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:05.295895    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:05.535575    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:05.912542    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:06.037450    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:06.292760    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:06.545874    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:06.783354    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:07.032980    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:07.289529    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:07.535541    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:07.780455    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:08.061359    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:08.284953    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:08.550247    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:08.777713    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:09.050431    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:09.289262    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:09.545760    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:09.788651    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:10.035312    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:10.286922    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:10.546797    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:10.781962    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:11.048842    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:11.302901    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:11.540095    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:11.843505    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:12.126181    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:12.494941    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:12.550926    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:12.805827    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:13.044390    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:13.301940    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:13.538056    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:13.799478    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:14.043612    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:14.291012    4176 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0721 23:33:14.538651    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:14.788590    4176 kapi.go:107] duration metric: took 2m36.5172083s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0721 23:33:15.046366    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:15.544533    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:16.053269    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:16.542367    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:17.041947    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:17.560416    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:18.057392    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:18.540176    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:19.036214    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:19.544266    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:20.103970    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:20.548880    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:21.042826    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:21.539973    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:22.057999    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:22.540980    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:23.048626    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:23.561169    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:24.037290    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:24.537604    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:25.049715    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:25.538154    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:26.053425    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:26.553609    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:27.051948    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:27.814789    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:28.052082    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:28.542320    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0721 23:33:29.045016    4176 kapi.go:107] duration metric: took 2m42.0199348s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
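	The three completed waits above (registry, ingress-nginx, csi-hostpath-driver) follow the same visible pattern: poll the pods matching a label selector until they leave the Pending phase, then log the elapsed time. A minimal sketch of such a polling loop, assuming client-go; this is an illustration of the pattern in the log, not minikube's actual kapi.go implementation:

	    package kapi_sketch

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // waitForPods polls every 250ms (roughly the cadence visible in the log
	    // above) until at least one pod matches selector and none are Pending,
	    // then prints the elapsed time, mirroring the "duration metric" line.
	    func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	    	start := time.Now()
	    	for {
	    		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	    		if err != nil {
	    			return err
	    		}
	    		pending := false
	    		for _, p := range pods.Items {
	    			if p.Status.Phase == corev1.PodPending {
	    				// Matches the repeated "waiting for pod ... current state: Pending" lines.
	    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
	    				pending = true
	    			}
	    		}
	    		if len(pods.Items) > 0 && !pending {
	    			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	    			return nil
	    		}
	    		select {
	    		case <-ctx.Done():
	    			return ctx.Err()
	    		case <-time.After(250 * time.Millisecond):
	    		}
	    	}
	    }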
	I0721 23:33:33.984933    4176 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0721 23:33:33.985009    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:34.472641    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:34.966022    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:35.474181    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:35.978672    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:36.478872    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:36.980074    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:37.465531    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:37.976888    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:38.476480    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:38.975159    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:39.467169    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:39.976656    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:40.479963    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:40.973331    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:41.472255    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:41.980057    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:42.466234    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:42.974481    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:43.474283    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:43.965853    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:44.464865    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:44.989108    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:45.478561    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:45.976542    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:46.477725    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:46.969386    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:47.470436    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:47.967780    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:48.470095    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:48.976887    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:49.464578    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:49.966364    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:50.465693    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:50.979732    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:51.478636    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:51.980626    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:52.474438    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:52.966625    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:53.471855    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:53.964697    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:54.491943    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:54.966189    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:55.483995    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:55.977124    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:56.479683    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:56.970418    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:57.476842    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:57.972946    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:58.466732    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:58.985108    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:59.468219    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:33:59.965398    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:00.471318    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:00.973441    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:01.479780    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:01.976265    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:02.469273    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:02.984899    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:03.474050    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:03.974986    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:04.474384    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:04.971789    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:05.466063    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:05.979659    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:06.479869    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:06.978216    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:07.469480    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:07.969627    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:08.471550    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:08.981984    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:09.549630    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:09.977086    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:10.469131    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:10.980841    4176 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0721 23:34:11.479452    4176 kapi.go:107] duration metric: took 3m22.5200218s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0721 23:34:11.484777    4176 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-979300 cluster.
	I0721 23:34:11.487429    4176 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0721 23:34:11.490031    4176 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0721 23:34:11.497376    4176 out.go:177] * Enabled addons: volcano, metrics-server, ingress-dns, nvidia-device-plugin, helm-tiller, cloud-spanner, storage-provisioner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0721 23:34:11.502182    4176 addons.go:510] duration metric: took 4m4.9271647s for enable addons: enabled=[volcano metrics-server ingress-dns nvidia-device-plugin helm-tiller cloud-spanner storage-provisioner inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0721 23:34:11.502543    4176 start.go:246] waiting for cluster config update ...
	I0721 23:34:11.502543    4176 start.go:255] writing updated cluster config ...
	I0721 23:34:11.515267    4176 ssh_runner.go:195] Run: rm -f paused
	I0721 23:34:11.768899    4176 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0721 23:34:11.776746    4176 out.go:177] * Done! kubectl is now configured to use "addons-979300" cluster and "default" namespace by default
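
The `gcp-auth-skip-secret` opt-out described in the messages above is an ordinary pod label that the gcp-auth mutating webhook checks before injecting the credential mount. A minimal sketch of opting a pod out (the pod name is illustrative, and the label value "true" is an assumption; the log only names the key):

    # hypothetical pod; only the label key comes from the log above
    kubectl run skip-demo --image=busybox:stable --restart=Never \
      --labels="gcp-auth-skip-secret=true" -- sleep 3600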
	
	
	==> Docker <==
	Jul 21 23:34:59 addons-979300 dockerd[1430]: time="2024-07-21T23:34:59.638648572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:34:59 addons-979300 dockerd[1430]: time="2024-07-21T23:34:59.638784774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:34:59 addons-979300 cri-dockerd[1325]: time="2024-07-21T23:34:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/de79741a640a1df0e986bffdbb68508aac45dcccf54de7b6b7accebd61681697/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 21 23:35:01 addons-979300 cri-dockerd[1325]: time="2024-07-21T23:35:01Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Jul 21 23:35:01 addons-979300 dockerd[1430]: time="2024-07-21T23:35:01.757757237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:35:01 addons-979300 dockerd[1430]: time="2024-07-21T23:35:01.757897938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:35:01 addons-979300 dockerd[1430]: time="2024-07-21T23:35:01.757925738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:35:01 addons-979300 dockerd[1430]: time="2024-07-21T23:35:01.758180241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:35:01 addons-979300 dockerd[1424]: time="2024-07-21T23:35:01.915676397Z" level=info msg="ignoring event" container=95cee6c4bdebc3a098ab8d1688f97eb48207c4683158e0238a7ce5c17eb4f8ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:35:01 addons-979300 dockerd[1430]: time="2024-07-21T23:35:01.921715757Z" level=info msg="shim disconnected" id=95cee6c4bdebc3a098ab8d1688f97eb48207c4683158e0238a7ce5c17eb4f8ff namespace=moby
	Jul 21 23:35:01 addons-979300 dockerd[1430]: time="2024-07-21T23:35:01.921859458Z" level=warning msg="cleaning up after shim disconnected" id=95cee6c4bdebc3a098ab8d1688f97eb48207c4683158e0238a7ce5c17eb4f8ff namespace=moby
	Jul 21 23:35:01 addons-979300 dockerd[1430]: time="2024-07-21T23:35:01.921935959Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:35:03 addons-979300 dockerd[1430]: time="2024-07-21T23:35:03.338889850Z" level=info msg="shim disconnected" id=de79741a640a1df0e986bffdbb68508aac45dcccf54de7b6b7accebd61681697 namespace=moby
	Jul 21 23:35:03 addons-979300 dockerd[1430]: time="2024-07-21T23:35:03.339017551Z" level=warning msg="cleaning up after shim disconnected" id=de79741a640a1df0e986bffdbb68508aac45dcccf54de7b6b7accebd61681697 namespace=moby
	Jul 21 23:35:03 addons-979300 dockerd[1430]: time="2024-07-21T23:35:03.339038351Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:35:03 addons-979300 dockerd[1424]: time="2024-07-21T23:35:03.341321574Z" level=info msg="ignoring event" container=de79741a640a1df0e986bffdbb68508aac45dcccf54de7b6b7accebd61681697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:35:09 addons-979300 dockerd[1424]: time="2024-07-21T23:35:09.919814929Z" level=info msg="ignoring event" container=560a5ac3cf6ebff5d8dda8bb5cd429dbbd7989219fbd0eeef67fd5bd2ca813db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:35:09 addons-979300 dockerd[1430]: time="2024-07-21T23:35:09.921267044Z" level=info msg="shim disconnected" id=560a5ac3cf6ebff5d8dda8bb5cd429dbbd7989219fbd0eeef67fd5bd2ca813db namespace=moby
	Jul 21 23:35:09 addons-979300 dockerd[1430]: time="2024-07-21T23:35:09.925242087Z" level=warning msg="cleaning up after shim disconnected" id=560a5ac3cf6ebff5d8dda8bb5cd429dbbd7989219fbd0eeef67fd5bd2ca813db namespace=moby
	Jul 21 23:35:09 addons-979300 dockerd[1430]: time="2024-07-21T23:35:09.925451489Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:35:10 addons-979300 dockerd[1424]: time="2024-07-21T23:35:10.180066908Z" level=info msg="ignoring event" container=28d8c42b1fa34c734cd56a554f217b9495953485c03c6e38e93bce8acfee16c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:35:10 addons-979300 dockerd[1430]: time="2024-07-21T23:35:10.181560424Z" level=info msg="shim disconnected" id=28d8c42b1fa34c734cd56a554f217b9495953485c03c6e38e93bce8acfee16c4 namespace=moby
	Jul 21 23:35:10 addons-979300 dockerd[1430]: time="2024-07-21T23:35:10.182447533Z" level=warning msg="cleaning up after shim disconnected" id=28d8c42b1fa34c734cd56a554f217b9495953485c03c6e38e93bce8acfee16c4 namespace=moby
	Jul 21 23:35:10 addons-979300 dockerd[1430]: time="2024-07-21T23:35:10.182549635Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:35:10 addons-979300 dockerd[1430]: time="2024-07-21T23:35:10.207180498Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:35:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	95cee6c4bdebc       busybox@sha256:9ae97d36d26566ff84e8893c64a6dc4fe8ca6d1144bf5b87b2b85a32def253c7                                                              10 seconds ago       Exited              busybox                                  0                   de79741a640a1       test-local-path
	d7a6dce280bc9       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              15 seconds ago       Exited              helper-pod                               0                   b14492410818b       helper-pod-create-pvc-271e3385-5895-4e4b-bd9d-59b933322d79
	bf99e3de94bee       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734                            17 seconds ago       Exited              gadget                                   4                   cfa5ad171aaa7       gadget-cww5g
	b1231985ef603       nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df                                                                18 seconds ago       Running             nginx                                    0                   9eea9c8e29433       test-job-nginx-0
	f5e928ae624c0       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                                        33 seconds ago       Running             headlamp                                 0                   5f6f406f1da39       headlamp-7867546754-962bd
	3d79454f42484       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   0a9a3a88a49ca       gcp-auth-5db96cd9b4-dqzjj
	e054c2d2d8dd2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   3730dda5c3dd5       csi-hostpathplugin-5vg4f
	b7765c5a4d984       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   3730dda5c3dd5       csi-hostpathplugin-5vg4f
	742985e3fac44       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   3730dda5c3dd5       csi-hostpathplugin-5vg4f
	d66801d7f5b9f       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         About a minute ago   Running             admission                                0                   c92f1f5db4730       volcano-admission-5f7844f7bc-zjmsj
	1f08a66f26d8d       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   3730dda5c3dd5       csi-hostpathplugin-5vg4f
	5fdcdb7a3c554       registry.k8s.io/ingress-nginx/controller@sha256:e6439a12b52076965928e83b7b56aae6731231677b01e81818bce7fa5c60161a                             2 minutes ago        Running             controller                               0                   5c8a98e749907       ingress-nginx-controller-6d9bd977d4-v6pkq
	3d8d1af739d11       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                2 minutes ago        Running             node-driver-registrar                    0                   3730dda5c3dd5       csi-hostpathplugin-5vg4f
	b82729941d04f       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   621f1a5f11f27       csi-hostpath-attacher-0
	73e68fc83c5b5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   3730dda5c3dd5       csi-hostpathplugin-5vg4f
	42094f64225a5       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   9ce3261c1a92c       csi-hostpath-resizer-0
	4f4aed0b8c76c       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      2 minutes ago        Running             volcano-controllers                      0                   73dfd9f4028b2       volcano-controllers-59cb4746db-h65gp
	0e4e924bbc311       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               2 minutes ago        Running             volcano-scheduler                        0                   9c05243cc9631       volcano-scheduler-844f6db89b-mqbc8
	4754a465abcdc       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   51013311477b9       snapshot-controller-745499f584-vtbgd
	113401e1fb4ba       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   ddde93b388c22       snapshot-controller-745499f584-nwfff
	fc935cf6a9e77       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   dffc9cc9d68b6       local-path-provisioner-8d985888d-vttq6
	7afb8a1ec864e       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        3 minutes ago        Running             yakd                                     0                   47694805b3e28       yakd-dashboard-799879c74f-8cf9k
	3605b11fd6199       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   3 minutes ago        Exited              patch                                    0                   aff0cba2b1114       ingress-nginx-admission-patch-5zn8c
	425674986c765       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   3 minutes ago        Exited              create                                   0                   01f07f53f8838       ingress-nginx-admission-create-g8trm
	479064a5abbb3       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  3 minutes ago        Running             tiller                                   0                   928a90131be75       tiller-deploy-6677d64bcd-4l99d
	664e74ab79992       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        3 minutes ago        Running             metrics-server                           0                   1b08cda80a8cc       metrics-server-c59844bb4-t9m2z
	979fd083785c1       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             3 minutes ago        Running             minikube-ingress-dns                     0                   626d2cdd8ef70       kube-ingress-dns-minikube
	1fca2609972a8       6e38f40d628db                                                                                                                                4 minutes ago        Running             storage-provisioner                      0                   9dd8550b8f658       storage-provisioner
	007efbcf98ead       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   fbdc39aab817d       coredns-7db6d8ff4d-plrcz
	69bbb641f5104       55bb025d2cfa5                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   f5182cd93f9c3       kube-proxy-j7wv2
	720564f28788a       76932a3b37d7e                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   26b7da16583bc       kube-controller-manager-addons-979300
	0674d129d7529       1f6d574d502f3                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   7763d8b4a79a7       kube-apiserver-addons-979300
	49b7289038585       3edc18e7b7672                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   6ec6346402c5b       kube-scheduler-addons-979300
	b3d2cb7f47da5       3861cfcd7c04c                                                                                                                                5 minutes ago        Running             etcd                                     0                   e727572f97d5c       etcd-addons-979300
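
The listing above is CRI-level container state from inside the node, which is why the Exited rows (the busybox test container, the helper pod, and the ingress admission create/patch jobs) appear alongside the running addon containers. A sketch of reproducing it on this profile with standard crictl flags (not taken from this run):

    minikube ssh -p addons-979300 -- sudo crictl ps -a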
	
	
	==> controller_ingress [5fdcdb7a3c55] <==
	W0721 23:33:14.002809       8 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0721 23:33:14.003040       8 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0721 23:33:14.011066       8 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.3" state="clean" commit="6fc0a69044f1ac4c13841ec4391224a2df241460" platform="linux/amd64"
	I0721 23:33:14.125954       8 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0721 23:33:14.153845       8 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0721 23:33:14.175200       8 nginx.go:271] "Starting NGINX Ingress controller"
	I0721 23:33:14.186778       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"de245052-4616-41e9-a8e9-e3e4d0af4d8d", APIVersion:"v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0721 23:33:14.190768       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"86c26b38-81db-481e-9620-24e843201d12", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0721 23:33:14.190827       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"d61726cc-acb8-4211-9b02-874871c6a92a", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0721 23:33:15.378657       8 nginx.go:317] "Starting NGINX process"
	I0721 23:33:15.379146       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0721 23:33:15.379462       8 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0721 23:33:15.379735       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0721 23:33:15.400754       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0721 23:33:15.402137       8 status.go:85] "New leader elected" identity="ingress-nginx-controller-6d9bd977d4-v6pkq"
	I0721 23:33:15.418773       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-6d9bd977d4-v6pkq" node="addons-979300"
	I0721 23:33:15.476686       8 controller.go:213] "Backend successfully reloaded"
	I0721 23:33:15.477073       8 controller.go:224] "Initial sync, sleeping for 1 second"
	I0721 23:33:15.477412       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6d9bd977d4-v6pkq", UID:"1b6654e0-1b9c-4ec4-9ab2-15a72c6e730e", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         7c44f992012555ff7f4e47c08d7c542ca9b4b1f7
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [007efbcf98ea] <==
	[INFO] 10.244.0.5:50869 - 34806 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000529604s
	[INFO] 10.244.0.5:43350 - 30513 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000116101s
	[INFO] 10.244.0.5:43350 - 49973 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000134601s
	[INFO] 10.244.0.5:59682 - 43359 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112501s
	[INFO] 10.244.0.5:59682 - 51804 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083501s
	[INFO] 10.244.0.5:36504 - 945 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000147901s
	[INFO] 10.244.0.5:36504 - 59052 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000083201s
	[INFO] 10.244.0.5:59444 - 40277 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000157702s
	[INFO] 10.244.0.5:59444 - 9577 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131502s
	[INFO] 10.244.0.5:52717 - 43 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094201s
	[INFO] 10.244.0.5:52717 - 9000 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000202602s
	[INFO] 10.244.0.5:39943 - 11136 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113401s
	[INFO] 10.244.0.5:39943 - 25229 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000149902s
	[INFO] 10.244.0.5:38804 - 18847 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000056001s
	[INFO] 10.244.0.5:38804 - 10137 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001339411s
	[INFO] 10.244.0.26:43301 - 56129 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000531903s
	[INFO] 10.244.0.26:37508 - 57444 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000098701s
	[INFO] 10.244.0.26:40557 - 56472 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000235402s
	[INFO] 10.244.0.26:58246 - 12439 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000532804s
	[INFO] 10.244.0.26:36569 - 9766 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0000987s
	[INFO] 10.244.0.26:53205 - 42763 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098401s
	[INFO] 10.244.0.26:41189 - 31309 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.001787313s
	[INFO] 10.244.0.26:33794 - 53128 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.001127208s
	[INFO] 10.244.0.27:44723 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000490604s
	[INFO] 10.244.0.27:39599 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000099101s
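
The NXDOMAIN bursts above are expected behaviour, not failures: with options ndots:5 in the pods' resolv.conf (visible in the cri-dockerd line in the Docker section), even registry.kube-system.svc.cluster.local has fewer than five dots, so the resolver first appends each search suffix — producing the failing .kube-system.svc.cluster.local.kube-system.svc.cluster.local lookups — before the name is tried as-is and answered NOERROR. A sketch of observing the same expansion from inside the cluster (the pod name is illustrative; busybox:stable is the image already pulled in this run):

    kubectl run dnscheck --rm -it --restart=Never --image=busybox:stable \
      -- nslookup registry.kube-system.svc.cluster.local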
	
	
	==> describe nodes <==
	Name:               addons-979300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-979300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=addons-979300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_21T23_29_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-979300
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-979300"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 21 Jul 2024 23:29:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-979300
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 21 Jul 2024 23:35:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 21 Jul 2024 23:34:59 +0000   Sun, 21 Jul 2024 23:29:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 21 Jul 2024 23:34:59 +0000   Sun, 21 Jul 2024 23:29:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 21 Jul 2024 23:34:59 +0000   Sun, 21 Jul 2024 23:29:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 21 Jul 2024 23:34:59 +0000   Sun, 21 Jul 2024 23:29:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.202.6
	  Hostname:    addons-979300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 240d50dbef1a4b3d9786f31decd0a268
	  System UUID:                baf60677-979c-7f48-824f-6a0d0d27aeab
	  Boot ID:                    b250fbf5-5493-4cf8-9e1b-d63eeb5fc3f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-cww5g                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  gcp-auth                    gcp-auth-5db96cd9b4-dqzjj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  headlamp                    headlamp-7867546754-962bd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-v6pkq    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m34s
	  kube-system                 coredns-7db6d8ff4d-plrcz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m3s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 csi-hostpathplugin-5vg4f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 etcd-addons-979300                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m18s
	  kube-system                 kube-apiserver-addons-979300                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-addons-979300        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-proxy-j7wv2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-addons-979300                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 metrics-server-c59844bb4-t9m2z               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m37s
	  kube-system                 snapshot-controller-745499f584-nwfff         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 snapshot-controller-745499f584-vtbgd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 tiller-deploy-6677d64bcd-4l99d               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  local-path-storage          local-path-provisioner-8d985888d-vttq6       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  my-volcano                  test-job-nginx-0                             1 (50%)       1 (50%)     0 (0%)           0 (0%)         41s
	  volcano-system              volcano-admission-5f7844f7bc-zjmsj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  volcano-system              volcano-controllers-59cb4746db-h65gp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  volcano-system              volcano-scheduler-844f6db89b-mqbc8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  yakd-dashboard              yakd-dashboard-799879c74f-8cf9k              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1950m (97%)  1 (50%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m26s)  kubelet          Node addons-979300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m26s)  kubelet          Node addons-979300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m26s)  kubelet          Node addons-979300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m18s                  kubelet          Node addons-979300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s                  kubelet          Node addons-979300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s                  kubelet          Node addons-979300 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m17s                  kubelet          Node addons-979300 status is now: NodeReady
	  Normal  RegisteredNode           5m6s                   node-controller  Node addons-979300 event: Registered Node addons-979300 in Controller
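
Note the CPU figure under Allocated resources above: 1950m of the node's 2 CPUs (97%) is already requested — the 1-CPU test-job-nginx-0 plus roughly 950m of control-plane and addon requests from the pod table — leaving almost no schedulable CPU. A sketch for checking headroom on this node (standard kubectl plus grep):

    kubectl describe node addons-979300 | grep -A 6 "Allocated resources"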
	
	
	==> dmesg <==
	[  +5.669159] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.518856] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.596592] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.712396] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.060838] kauditd_printk_skb: 83 callbacks suppressed
	[Jul21 23:31] kauditd_printk_skb: 67 callbacks suppressed
	[ +29.255177] kauditd_printk_skb: 2 callbacks suppressed
	[Jul21 23:32] hrtimer: interrupt took 1493713 ns
	[  +0.529205] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.568724] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.822490] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.711952] kauditd_printk_skb: 34 callbacks suppressed
	[ +14.371122] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.432508] kauditd_printk_skb: 54 callbacks suppressed
	[Jul21 23:33] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.398657] kauditd_printk_skb: 47 callbacks suppressed
	[  +7.657514] kauditd_printk_skb: 4 callbacks suppressed
	[ +28.843845] kauditd_printk_skb: 79 callbacks suppressed
	[Jul21 23:34] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.405277] kauditd_printk_skb: 31 callbacks suppressed
	[  +7.382709] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.526236] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.219920] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.086058] kauditd_printk_skb: 42 callbacks suppressed
	[Jul21 23:35] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [b3d2cb7f47da] <==
	{"level":"warn","ts":"2024-07-21T23:34:39.525523Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.643357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-21T23:34:39.525781Z","caller":"traceutil/trace.go:171","msg":"trace[440178184] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1702; }","duration":"210.93316ms","start":"2024-07-21T23:34:39.314837Z","end":"2024-07-21T23:34:39.52577Z","steps":["trace[440178184] 'count revisions from in-memory index tree'  (duration: 210.536457ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-21T23:34:39.986816Z","caller":"traceutil/trace.go:171","msg":"trace[821595913] transaction","detail":"{read_only:false; response_revision:1704; number_of_response:1; }","duration":"356.353043ms","start":"2024-07-21T23:34:39.630443Z","end":"2024-07-21T23:34:39.986796Z","steps":["trace[821595913] 'process raft request'  (duration: 285.188515ms)","trace[821595913] 'compare'  (duration: 69.452012ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-21T23:34:39.986977Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-21T23:34:39.630422Z","time spent":"356.507444ms","remote":"127.0.0.1:37754","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2458,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/default/test-local-path\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/test-local-path\" value_size:2412 >> failure:<>"}
	{"level":"warn","ts":"2024-07-21T23:34:40.380134Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.42836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3962"}
	{"level":"info","ts":"2024-07-21T23:34:40.380192Z","caller":"traceutil/trace.go:171","msg":"trace[301654908] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1706; }","duration":"222.573262ms","start":"2024-07-21T23:34:40.157604Z","end":"2024-07-21T23:34:40.380178Z","steps":["trace[301654908] 'range keys from in-memory index tree'  (duration: 222.29906ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:34:40.380453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.510141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" ","response":"range_response_count:1 size:1069"}
	{"level":"info","ts":"2024-07-21T23:34:40.380595Z","caller":"traceutil/trace.go:171","msg":"trace[915207691] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1706; }","duration":"197.649442ms","start":"2024-07-21T23:34:40.182908Z","end":"2024-07-21T23:34:40.380558Z","steps":["trace[915207691] 'range keys from in-memory index tree'  (duration: 197.45744ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-21T23:34:40.614429Z","caller":"traceutil/trace.go:171","msg":"trace[412506389] linearizableReadLoop","detail":"{readStateIndex:1795; appliedIndex:1794; }","duration":"128.108629ms","start":"2024-07-21T23:34:40.486302Z","end":"2024-07-21T23:34:40.614411Z","steps":["trace[412506389] 'read index received'  (duration: 106.380938ms)","trace[412506389] 'applied index is now lower than readState.Index'  (duration: 21.726891ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-21T23:34:40.614525Z","caller":"traceutil/trace.go:171","msg":"trace[694721295] transaction","detail":"{read_only:false; response_revision:1708; number_of_response:1; }","duration":"133.474476ms","start":"2024-07-21T23:34:40.48104Z","end":"2024-07-21T23:34:40.614515Z","steps":["trace[694721295] 'process raft request'  (duration: 111.696584ms)","trace[694721295] 'compare'  (duration: 21.005086ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-21T23:34:40.614788Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.549633ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/pvc-271e3385-5895-4e4b-bd9d-59b933322d79\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-21T23:34:40.614816Z","caller":"traceutil/trace.go:171","msg":"trace[415915303] range","detail":"{range_begin:/registry/persistentvolumes/pvc-271e3385-5895-4e4b-bd9d-59b933322d79; range_end:; response_count:0; response_revision:1708; }","duration":"128.621134ms","start":"2024-07-21T23:34:40.486186Z","end":"2024-07-21T23:34:40.614807Z","steps":["trace[415915303] 'agreement among raft nodes before linearized reading'  (duration: 128.440832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:34:49.210386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.527111ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:3731"}
	{"level":"info","ts":"2024-07-21T23:34:49.210462Z","caller":"traceutil/trace.go:171","msg":"trace[559393868] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1747; }","duration":"302.678512ms","start":"2024-07-21T23:34:48.907769Z","end":"2024-07-21T23:34:49.210447Z","steps":["trace[559393868] 'range keys from in-memory index tree'  (duration: 302.415909ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:34:49.210489Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-21T23:34:48.907752Z","time spent":"302.729313ms","remote":"127.0.0.1:37754","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":3755,"request content":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" "}
	{"level":"warn","ts":"2024-07-21T23:34:49.210638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.973646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-271e3385-5895-4e4b-bd9d-59b933322d79\" ","response":"range_response_count:1 size:4202"}
	{"level":"info","ts":"2024-07-21T23:34:49.210663Z","caller":"traceutil/trace.go:171","msg":"trace[110621603] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-271e3385-5895-4e4b-bd9d-59b933322d79; range_end:; response_count:1; response_revision:1747; }","duration":"288.023847ms","start":"2024-07-21T23:34:48.922631Z","end":"2024-07-21T23:34:49.210655Z","steps":["trace[110621603] 'range keys from in-memory index tree'  (duration: 287.864645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:34:49.21087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.226182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-21T23:34:49.210892Z","caller":"traceutil/trace.go:171","msg":"trace[916659125] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1747; }","duration":"149.287482ms","start":"2024-07-21T23:34:49.061599Z","end":"2024-07-21T23:34:49.210886Z","steps":["trace[916659125] 'count revisions from in-memory index tree'  (duration: 149.150481ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-21T23:34:51.083119Z","caller":"traceutil/trace.go:171","msg":"trace[1961078378] linearizableReadLoop","detail":"{readStateIndex:1839; appliedIndex:1838; }","duration":"174.296862ms","start":"2024-07-21T23:34:50.908801Z","end":"2024-07-21T23:34:51.083098Z","steps":["trace[1961078378] 'read index received'  (duration: 174.06516ms)","trace[1961078378] 'applied index is now lower than readState.Index'  (duration: 231.002µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-21T23:34:51.083618Z","caller":"traceutil/trace.go:171","msg":"trace[1806206745] transaction","detail":"{read_only:false; response_revision:1749; number_of_response:1; }","duration":"190.848248ms","start":"2024-07-21T23:34:50.892753Z","end":"2024-07-21T23:34:51.083602Z","steps":["trace[1806206745] 'process raft request'  (duration: 190.164042ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:34:51.083874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.059069ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/test-pvc.17e45e846fcc3dd6\" ","response":"range_response_count:1 size:901"}
	{"level":"info","ts":"2024-07-21T23:34:51.083902Z","caller":"traceutil/trace.go:171","msg":"trace[865744777] range","detail":"{range_begin:/registry/events/default/test-pvc.17e45e846fcc3dd6; range_end:; response_count:1; response_revision:1749; }","duration":"175.125269ms","start":"2024-07-21T23:34:50.908769Z","end":"2024-07-21T23:34:51.083894Z","steps":["trace[865744777] 'agreement among raft nodes before linearized reading'  (duration: 175.017468ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-21T23:34:51.084055Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.072169ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/my-volcano/\" range_end:\"/registry/pods/my-volcano0\" ","response":"range_response_count:1 size:3731"}
	{"level":"info","ts":"2024-07-21T23:34:51.084082Z","caller":"traceutil/trace.go:171","msg":"trace[1743135217] range","detail":"{range_begin:/registry/pods/my-volcano/; range_end:/registry/pods/my-volcano0; response_count:1; response_revision:1749; }","duration":"175.111269ms","start":"2024-07-21T23:34:50.908965Z","end":"2024-07-21T23:34:51.084076Z","steps":["trace[1743135217] 'agreement among raft nodes before linearized reading'  (duration: 175.041768ms)"],"step_count":1}
	
	
	==> gcp-auth [3d79454f4248] <==
	2024/07/21 23:34:10 GCP Auth Webhook started!
	2024/07/21 23:34:24 Ready to marshal response ...
	2024/07/21 23:34:24 Ready to write response ...
	2024/07/21 23:34:27 Ready to marshal response ...
	2024/07/21 23:34:27 Ready to write response ...
	2024/07/21 23:34:28 Ready to marshal response ...
	2024/07/21 23:34:28 Ready to write response ...
	2024/07/21 23:34:28 Ready to marshal response ...
	2024/07/21 23:34:28 Ready to write response ...
	2024/07/21 23:34:29 Ready to marshal response ...
	2024/07/21 23:34:29 Ready to write response ...
	2024/07/21 23:34:30 Ready to marshal response ...
	2024/07/21 23:34:30 Ready to write response ...
	2024/07/21 23:34:39 Ready to marshal response ...
	2024/07/21 23:34:39 Ready to write response ...
	2024/07/21 23:34:40 Ready to marshal response ...
	2024/07/21 23:34:40 Ready to write response ...
	
	
	==> kernel <==
	 23:35:11 up 7 min,  0 users,  load average: 2.24, 2.16, 1.09
	Linux addons-979300 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0674d129d752] <==
	W0721 23:33:08.266314       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:09.281018       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:10.343124       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:11.397005       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:12.469745       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:13.506014       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:14.585235       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:15.682528       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:16.742146       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:17.786031       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:18.861930       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:19.964005       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:21.038590       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:22.058598       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:23.114735       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:24.192983       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.153.171:443: connect: connection refused
	W0721 23:33:33.802830       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.200.229:443: connect: connection refused
	E0721 23:33:33.802888       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.200.229:443: connect: connection refused
	W0721 23:33:52.088019       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.200.229:443: connect: connection refused
	E0721 23:33:52.088264       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.200.229:443: connect: connection refused
	W0721 23:33:52.096733       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.200.229:443: connect: connection refused
	E0721 23:33:52.097006       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.200.229:443: connect: connection refused
	I0721 23:34:27.964599       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.59.93"}
	I0721 23:34:29.641001       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0721 23:34:29.733785       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
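
The run of dispatcher.go:225 warnings above shows the two webhook failure policies side by side: mutatequeue.volcano.sh fails closed, so queue mutations are rejected for the whole window in which volcano-admission-service refuses connections, while gcp-auth-mutate.k8s.io fails open and those requests proceed with only a logged error. A client-go sketch (the kubeconfig path is an assumption) that prints each mutating webhook's effective failurePolicy:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		whcs, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
			List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, whc := range whcs.Items {
			for _, wh := range whc.Webhooks {
				policy := "Fail" // the v1 API defaults to failing closed
				if wh.FailurePolicy != nil {
					policy = string(*wh.FailurePolicy)
				}
				fmt.Printf("%s\t%s\tfailurePolicy=%s\n", whc.Name, wh.Name, policy)
			}
		}
	}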
	
	
	==> kube-controller-manager [720564f28788] <==
	I0721 23:33:56.608907       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0721 23:33:56.765071       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0721 23:33:56.781015       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0721 23:33:56.796675       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0721 23:33:56.866991       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0721 23:33:57.617670       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0721 23:33:57.635981       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0721 23:33:57.652192       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0721 23:33:57.676052       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0721 23:34:11.116066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="21.653355ms"
	I0721 23:34:11.116605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="85.001µs"
	I0721 23:34:26.041897       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0721 23:34:26.165528       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0721 23:34:27.019311       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0721 23:34:27.101894       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0721 23:34:28.143956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="114.97508ms"
	I0721 23:34:28.171485       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="25.239993ms"
	I0721 23:34:28.171722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="127.301µs"
	I0721 23:34:28.183705       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="42.6µs"
	I0721 23:34:29.176617       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init"
	I0721 23:34:41.723213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="79.3µs"
	I0721 23:34:41.829786       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="42.508974ms"
	I0721 23:34:41.829904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7867546754" duration="59µs"
	I0721 23:34:48.351294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-656c9c8d9c" duration="4.201µs"
	I0721 23:35:09.805564       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-6fcd4f6f98" duration="4.9µs"
	
	
	==> kube-proxy [69bbb641f510] <==
	I0721 23:30:19.092973       1 server_linux.go:69] "Using iptables proxy"
	I0721 23:30:19.308915       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.202.6"]
	I0721 23:30:19.641035       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0721 23:30:19.641189       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0721 23:30:19.641224       1 server_linux.go:165] "Using iptables Proxier"
	I0721 23:30:19.723578       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0721 23:30:19.736828       1 server.go:872] "Version info" version="v1.30.3"
	I0721 23:30:19.736923       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0721 23:30:19.739852       1 config.go:192] "Starting service config controller"
	I0721 23:30:19.739888       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0721 23:30:19.739934       1 config.go:101] "Starting endpoint slice config controller"
	I0721 23:30:19.739950       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0721 23:30:19.740940       1 config.go:319] "Starting node config controller"
	I0721 23:30:19.740990       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0721 23:30:19.848184       1 shared_informer.go:320] Caches are synced for node config
	I0721 23:30:19.848320       1 shared_informer.go:320] Caches are synced for service config
	I0721 23:30:19.848445       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [49b728903858] <==
	W0721 23:29:50.575589       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0721 23:29:50.575671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0721 23:29:50.640507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0721 23:29:50.640720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0721 23:29:50.650533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0721 23:29:50.650732       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0721 23:29:50.704914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0721 23:29:50.705115       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0721 23:29:50.715068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0721 23:29:50.715252       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0721 23:29:50.804865       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0721 23:29:50.805048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0721 23:29:50.820793       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0721 23:29:50.820898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0721 23:29:50.822204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0721 23:29:50.822258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0721 23:29:50.863114       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0721 23:29:50.863522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0721 23:29:51.051445       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0721 23:29:51.051833       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0721 23:29:51.293450       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0721 23:29:51.293580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0721 23:29:51.350552       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0721 23:29:51.350874       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0721 23:29:54.235027       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
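
The forbidden errors at 23:29:50-51 are startup noise: the scheduler's informers begin listing before RBAC bootstrapping has settled, and the closing "Caches are synced" line at 23:29:54 shows it recovered without intervention. To replay one of those authorization checks for the same identity, a SelfSubjectAccessReview combined with user impersonation works; this is a sketch, and everything except the user name and resource (both taken from the log) is an assumption:

	package main

	import (
		"context"
		"fmt"
		"log"

		authzv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		// Impersonate the identity from the log; the caller's kubeconfig
		// must permit impersonation for this to succeed.
		cfg.Impersonate = rest.ImpersonationConfig{UserName: "system:kube-scheduler"}
		cs := kubernetes.NewForConfigOrDie(cfg)
		rev := &authzv1.SelfSubjectAccessReview{
			Spec: authzv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authzv1.ResourceAttributes{
					Verb:     "list",
					Resource: "persistentvolumes", // the first check seen failing above
				},
			},
		}
		res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
			Create(context.Background(), rev, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	}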
	
	
	==> kubelet <==
	Jul 21 23:34:59 addons-979300 kubelet[2258]: I0721 23:34:59.056916    2258 memory_manager.go:354] "RemoveStaleState removing state" podUID="46398a1d-dacc-4292-8984-e35ae91f0e91" containerName="registry"
	Jul 21 23:34:59 addons-979300 kubelet[2258]: I0721 23:34:59.057044    2258 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb5100be-763d-433b-8160-6551a6e6c4ed" containerName="registry-proxy"
	Jul 21 23:34:59 addons-979300 kubelet[2258]: I0721 23:34:59.057181    2258 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dae3253-f3c7-40f3-9235-cac59a77cb09" containerName="helper-pod"
	Jul 21 23:34:59 addons-979300 kubelet[2258]: I0721 23:34:59.181829    2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz2hg\" (UniqueName: \"kubernetes.io/projected/14aca3e9-3e3e-4b20-915f-60844f7d390c-kube-api-access-mz2hg\") pod \"test-local-path\" (UID: \"14aca3e9-3e3e-4b20-915f-60844f7d390c\") " pod="default/test-local-path"
	Jul 21 23:34:59 addons-979300 kubelet[2258]: I0721 23:34:59.182109    2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-271e3385-5895-4e4b-bd9d-59b933322d79\" (UniqueName: \"kubernetes.io/host-path/14aca3e9-3e3e-4b20-915f-60844f7d390c-pvc-271e3385-5895-4e4b-bd9d-59b933322d79\") pod \"test-local-path\" (UID: \"14aca3e9-3e3e-4b20-915f-60844f7d390c\") " pod="default/test-local-path"
	Jul 21 23:34:59 addons-979300 kubelet[2258]: I0721 23:34:59.182215    2258 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/14aca3e9-3e3e-4b20-915f-60844f7d390c-gcp-creds\") pod \"test-local-path\" (UID: \"14aca3e9-3e3e-4b20-915f-60844f7d390c\") " pod="default/test-local-path"
	Jul 21 23:34:59 addons-979300 kubelet[2258]: I0721 23:34:59.287720    2258 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dae3253-f3c7-40f3-9235-cac59a77cb09" path="/var/lib/kubelet/pods/3dae3253-f3c7-40f3-9235-cac59a77cb09/volumes"
	Jul 21 23:35:03 addons-979300 kubelet[2258]: I0721 23:35:03.637878    2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/14aca3e9-3e3e-4b20-915f-60844f7d390c-gcp-creds\") pod \"14aca3e9-3e3e-4b20-915f-60844f7d390c\" (UID: \"14aca3e9-3e3e-4b20-915f-60844f7d390c\") "
	Jul 21 23:35:03 addons-979300 kubelet[2258]: I0721 23:35:03.637964    2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14aca3e9-3e3e-4b20-915f-60844f7d390c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "14aca3e9-3e3e-4b20-915f-60844f7d390c" (UID: "14aca3e9-3e3e-4b20-915f-60844f7d390c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 21 23:35:03 addons-979300 kubelet[2258]: I0721 23:35:03.638988    2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz2hg\" (UniqueName: \"kubernetes.io/projected/14aca3e9-3e3e-4b20-915f-60844f7d390c-kube-api-access-mz2hg\") pod \"14aca3e9-3e3e-4b20-915f-60844f7d390c\" (UID: \"14aca3e9-3e3e-4b20-915f-60844f7d390c\") "
	Jul 21 23:35:03 addons-979300 kubelet[2258]: I0721 23:35:03.639217    2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/14aca3e9-3e3e-4b20-915f-60844f7d390c-pvc-271e3385-5895-4e4b-bd9d-59b933322d79\") pod \"14aca3e9-3e3e-4b20-915f-60844f7d390c\" (UID: \"14aca3e9-3e3e-4b20-915f-60844f7d390c\") "
	Jul 21 23:35:03 addons-979300 kubelet[2258]: I0721 23:35:03.639513    2258 reconciler_common.go:289] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/14aca3e9-3e3e-4b20-915f-60844f7d390c-gcp-creds\") on node \"addons-979300\" DevicePath \"\""
	Jul 21 23:35:03 addons-979300 kubelet[2258]: I0721 23:35:03.639646    2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14aca3e9-3e3e-4b20-915f-60844f7d390c-pvc-271e3385-5895-4e4b-bd9d-59b933322d79" (OuterVolumeSpecName: "data") pod "14aca3e9-3e3e-4b20-915f-60844f7d390c" (UID: "14aca3e9-3e3e-4b20-915f-60844f7d390c"). InnerVolumeSpecName "pvc-271e3385-5895-4e4b-bd9d-59b933322d79". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 21 23:35:03 addons-979300 kubelet[2258]: I0721 23:35:03.647236    2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14aca3e9-3e3e-4b20-915f-60844f7d390c-kube-api-access-mz2hg" (OuterVolumeSpecName: "kube-api-access-mz2hg") pod "14aca3e9-3e3e-4b20-915f-60844f7d390c" (UID: "14aca3e9-3e3e-4b20-915f-60844f7d390c"). InnerVolumeSpecName "kube-api-access-mz2hg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 21 23:35:03 addons-979300 kubelet[2258]: I0721 23:35:03.740172    2258 reconciler_common.go:289] "Volume detached for volume \"pvc-271e3385-5895-4e4b-bd9d-59b933322d79\" (UniqueName: \"kubernetes.io/host-path/14aca3e9-3e3e-4b20-915f-60844f7d390c-pvc-271e3385-5895-4e4b-bd9d-59b933322d79\") on node \"addons-979300\" DevicePath \"\""
	Jul 21 23:35:03 addons-979300 kubelet[2258]: I0721 23:35:03.740317    2258 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mz2hg\" (UniqueName: \"kubernetes.io/projected/14aca3e9-3e3e-4b20-915f-60844f7d390c-kube-api-access-mz2hg\") on node \"addons-979300\" DevicePath \"\""
	Jul 21 23:35:04 addons-979300 kubelet[2258]: I0721 23:35:04.179480    2258 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de79741a640a1df0e986bffdbb68508aac45dcccf54de7b6b7accebd61681697"
	Jul 21 23:35:10 addons-979300 kubelet[2258]: I0721 23:35:10.413186    2258 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq9hr\" (UniqueName: \"kubernetes.io/projected/100c04c1-c805-47b0-ba3c-78f297801304-kube-api-access-lq9hr\") pod \"100c04c1-c805-47b0-ba3c-78f297801304\" (UID: \"100c04c1-c805-47b0-ba3c-78f297801304\") "
	Jul 21 23:35:10 addons-979300 kubelet[2258]: I0721 23:35:10.431935    2258 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100c04c1-c805-47b0-ba3c-78f297801304-kube-api-access-lq9hr" (OuterVolumeSpecName: "kube-api-access-lq9hr") pod "100c04c1-c805-47b0-ba3c-78f297801304" (UID: "100c04c1-c805-47b0-ba3c-78f297801304"). InnerVolumeSpecName "kube-api-access-lq9hr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 21 23:35:10 addons-979300 kubelet[2258]: I0721 23:35:10.433112    2258 scope.go:117] "RemoveContainer" containerID="560a5ac3cf6ebff5d8dda8bb5cd429dbbd7989219fbd0eeef67fd5bd2ca813db"
	Jul 21 23:35:10 addons-979300 kubelet[2258]: I0721 23:35:10.514272    2258 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lq9hr\" (UniqueName: \"kubernetes.io/projected/100c04c1-c805-47b0-ba3c-78f297801304-kube-api-access-lq9hr\") on node \"addons-979300\" DevicePath \"\""
	Jul 21 23:35:10 addons-979300 kubelet[2258]: I0721 23:35:10.536890    2258 scope.go:117] "RemoveContainer" containerID="560a5ac3cf6ebff5d8dda8bb5cd429dbbd7989219fbd0eeef67fd5bd2ca813db"
	Jul 21 23:35:10 addons-979300 kubelet[2258]: E0721 23:35:10.539408    2258 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 560a5ac3cf6ebff5d8dda8bb5cd429dbbd7989219fbd0eeef67fd5bd2ca813db" containerID="560a5ac3cf6ebff5d8dda8bb5cd429dbbd7989219fbd0eeef67fd5bd2ca813db"
	Jul 21 23:35:10 addons-979300 kubelet[2258]: I0721 23:35:10.539449    2258 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"560a5ac3cf6ebff5d8dda8bb5cd429dbbd7989219fbd0eeef67fd5bd2ca813db"} err="failed to get container status \"560a5ac3cf6ebff5d8dda8bb5cd429dbbd7989219fbd0eeef67fd5bd2ca813db\": rpc error: code = Unknown desc = Error response from daemon: No such container: 560a5ac3cf6ebff5d8dda8bb5cd429dbbd7989219fbd0eeef67fd5bd2ca813db"
	Jul 21 23:35:11 addons-979300 kubelet[2258]: I0721 23:35:11.257203    2258 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100c04c1-c805-47b0-ba3c-78f297801304" path="/var/lib/kubelet/pods/100c04c1-c805-47b0-ba3c-78f297801304/volumes"
	
	
	==> storage-provisioner [1fca2609972a] <==
	I0721 23:30:44.699481       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0721 23:30:44.813863       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0721 23:30:44.814078       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0721 23:30:44.985545       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0721 23:30:44.985969       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-979300_2c270d06-f192-45ae-ab7f-a0c01660e2ca!
	I0721 23:30:45.036599       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"45b9c7e3-f5d8-4a8b-9cef-db4f4f8dc71d", APIVersion:"v1", ResourceVersion:"867", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-979300_2c270d06-f192-45ae-ab7f-a0c01660e2ca became leader
	I0721 23:30:45.190108       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-979300_2c270d06-f192-45ae-ab7f-a0c01660e2ca!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:35:01.931973     760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
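
The "Unable to resolve the current Docker CLI context" warning in this stderr block recurs in nearly every test of this report and is unrelated to the individual failures: it fires while minikube probes the Docker CLI context store, before the Hyper-V driver is engaged. The directory it cannot find is named after the SHA-256 of the context name ("default"), which this small sketch reproduces:

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	func main() {
		// The Docker CLI keeps context metadata under
		// ~/.docker/contexts/meta/<sha256(context name)>/meta.json.
		fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
		// Expected: 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
		// the directory named in the warning above.
	}
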
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-979300 -n addons-979300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-979300 -n addons-979300: (13.3705563s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-979300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-g8trm ingress-nginx-admission-patch-5zn8c
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-979300 describe pod ingress-nginx-admission-create-g8trm ingress-nginx-admission-patch-5zn8c
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-979300 describe pod ingress-nginx-admission-create-g8trm ingress-nginx-admission-patch-5zn8c: exit status 1 (171.9385ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g8trm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5zn8c" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-979300 describe pod ingress-nginx-admission-create-g8trm ingress-nginx-admission-patch-5zn8c: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.30s)
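
Two details of this post-mortem are worth noting: the field selector status.phase!=Running matches the ingress-nginx admission pods because pods of completed Jobs sit in Succeeded, and the follow-up describe returns NotFound because those pods were cleaned up between the two kubectl calls. For reference, the same filter expressed with client-go (a sketch; the kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Same filter as the kubectl call above: every pod, in every
		// namespace, whose phase is anything but Running.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).
			List(context.Background(), metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}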

                                                
                                    
TestErrorSpam/setup (200.35s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-420400 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-420400 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 --driver=hyperv: (3m20.3448205s)
error_spam_test.go:96: unexpected stderr: "W0721 23:39:21.874903    9824 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-420400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=19312
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-420400" primary control-plane node in "nospam-420400" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-420400" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0721 23:39:21.874903    9824 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (200.35s)
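
This failure is a strictness check rather than a broken cluster: the start completed ("Done! kubectl is now configured..."), but TestErrorSpam/setup requires that stderr contain nothing outside an allowlist, and both the Docker-context warning and the registry.k8s.io connectivity notice count as unexpected. A sketch of that style of check; the empty allowlist is illustrative, not minikube's actual list:

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// allowed holds the patterns a stderr line may match; left empty here,
	// which treats every non-blank line as spam, as this run effectively did.
	var allowed []*regexp.Regexp

	// unexpectedStderr returns each non-blank line matching no allowed pattern.
	func unexpectedStderr(stderr string) []string {
		var bad []string
		for _, line := range strings.Split(stderr, "\n") {
			line = strings.TrimSpace(line)
			if line == "" {
				continue
			}
			ok := false
			for _, re := range allowed {
				if re.MatchString(line) {
					ok = true
					break
				}
			}
			if !ok {
				bad = append(bad, line)
			}
		}
		return bad
	}

	func main() {
		fmt.Println(unexpectedStderr("! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"))
	}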

                                                
                                    
TestFunctional/serial/SoftStart (345.13s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-264400 --alsologtostderr -v=8
E0721 23:49:39.667413    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-264400 --alsologtostderr -v=8: exit status 90 (2m31.7292119s)

                                                
                                                
-- stdout --
	* [functional-264400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-264400" primary control-plane node in "functional-264400" cluster
	* Updating the running hyperv "functional-264400" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:49:13.176021    3296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0721 23:49:13.243734    3296 out.go:291] Setting OutFile to fd 632 ...
	I0721 23:49:13.245089    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:49:13.245089    3296 out.go:304] Setting ErrFile to fd 612...
	I0721 23:49:13.245225    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:49:13.271799    3296 out.go:298] Setting JSON to false
	I0721 23:49:13.274576    3296 start.go:129] hostinfo: {"hostname":"minikube6","uptime":120960,"bootTime":1721484792,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0721 23:49:13.275656    3296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 23:49:13.279846    3296 out.go:177] * [functional-264400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0721 23:49:13.284743    3296 notify.go:220] Checking for updates...
	I0721 23:49:13.286577    3296 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0721 23:49:13.288761    3296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:49:13.292203    3296 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0721 23:49:13.295335    3296 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:49:13.299523    3296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:49:13.304221    3296 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:49:13.304533    3296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:49:18.732833    3296 out.go:177] * Using the hyperv driver based on existing profile
	I0721 23:49:18.737459    3296 start.go:297] selected driver: hyperv
	I0721 23:49:18.737459    3296 start.go:901] validating driver "hyperv" against &{Name:functional-264400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-264400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.97 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:49:18.737459    3296 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:49:18.788011    3296 cni.go:84] Creating CNI manager for ""
	I0721 23:49:18.788078    3296 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 23:49:18.788279    3296 start.go:340] cluster config:
	{Name:functional-264400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-264400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.97 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:49:18.788712    3296 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:49:18.793652    3296 out.go:177] * Starting "functional-264400" primary control-plane node in "functional-264400" cluster
	I0721 23:49:18.796373    3296 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 23:49:18.796558    3296 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 23:49:18.796558    3296 cache.go:56] Caching tarball of preloaded images
	I0721 23:49:18.796558    3296 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 23:49:18.796558    3296 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 23:49:18.797352    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\config.json ...
	I0721 23:49:18.798980    3296 start.go:360] acquireMachinesLock for functional-264400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:49:18.798980    3296 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-264400"
	I0721 23:49:18.798980    3296 start.go:96] Skipping create...Using existing machine configuration
	I0721 23:49:18.799979    3296 fix.go:54] fixHost starting: 
	I0721 23:49:18.799979    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:21.623488    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:21.623488    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:21.623488    3296 fix.go:112] recreateIfNeeded on functional-264400: state=Running err=<nil>
	W0721 23:49:21.623488    3296 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 23:49:21.629305    3296 out.go:177] * Updating the running hyperv "functional-264400" VM ...
	I0721 23:49:21.631533    3296 machine.go:94] provisionDockerMachine start ...
	I0721 23:49:21.631533    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:23.852288    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:23.852288    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:23.852522    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:26.470028    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:26.470028    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:26.478442    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:26.479167    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:26.479167    3296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0721 23:49:26.622339    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-264400
	
	I0721 23:49:26.622466    3296 buildroot.go:166] provisioning hostname "functional-264400"
	I0721 23:49:26.622607    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:28.824177    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:28.824557    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:28.824557    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:31.414467    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:31.414467    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:31.420319    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:31.421099    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:31.421099    3296 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-264400 && echo "functional-264400" | sudo tee /etc/hostname
	I0721 23:49:31.588774    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-264400
	
	I0721 23:49:31.588774    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:33.790066    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:33.790474    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:33.790474    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:36.394275    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:36.394275    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:36.399837    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:36.400299    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:36.400299    3296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-264400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-264400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-264400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:49:36.533255    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:49:36.533255    3296 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0721 23:49:36.533255    3296 buildroot.go:174] setting up certificates
	I0721 23:49:36.533255    3296 provision.go:84] configureAuth start
	I0721 23:49:36.533977    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:38.735744    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:38.735744    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:38.736834    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:41.319431    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:41.319431    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:41.319569    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:46.052701    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:46.052760    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:46.052760    3296 provision.go:143] copyHostCerts
	I0721 23:49:46.052760    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0721 23:49:46.053530    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0721 23:49:46.053530    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0721 23:49:46.054196    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0721 23:49:46.055555    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0721 23:49:46.055555    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0721 23:49:46.055555    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0721 23:49:46.056169    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0721 23:49:46.056925    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0721 23:49:46.057723    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0721 23:49:46.057723    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0721 23:49:46.057723    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0721 23:49:46.059166    3296 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-264400 san=[127.0.0.1 172.28.193.97 functional-264400 localhost minikube]
	I0721 23:49:46.255062    3296 provision.go:177] copyRemoteCerts
	I0721 23:49:46.265961    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:49:46.265961    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:48.459327    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:48.459802    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:48.459881    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:51.076501    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:51.076501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:51.078136    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:49:51.186062    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.919744s)
	I0721 23:49:51.186137    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0721 23:49:51.186285    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0721 23:49:51.234406    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0721 23:49:51.234628    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0721 23:49:51.286824    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0721 23:49:51.286998    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0721 23:49:51.338595    3296 provision.go:87] duration metric: took 14.8051233s to configureAuth
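	The configureAuth step above generated a server certificate signed by the local minikube CA, with SANs covering the loopback address, the VM IP, and the machine names (the san=[...] list), then copied ca.pem, server.pem, and server-key.pem into /etc/docker for dockerd's --tlsverify flags. A quick way to confirm the SANs from inside the VM (a verification sketch, not part of the recorded run):
	
		openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'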
	I0721 23:49:51.338595    3296 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:49:51.339337    3296 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:49:51.339480    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:53.533831    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:53.534329    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:53.534329    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:56.137211    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:56.137352    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:56.143046    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:56.143046    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:56.143046    3296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0721 23:49:56.285359    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0721 23:49:56.285421    3296 buildroot.go:70] root file system type: tmpfs
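	The probe above is a single pipeline run over SSH:
	
		df --output=fstype / | tail -n 1
	
	A tmpfs result is expected for the minikube ISO, whose root filesystem lives in RAM; nothing written under /lib/systemd/system survives a reboot, which is one reason the provisioner rewrites and re-checks the docker unit on every start, as happens next.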
	I0721 23:49:56.285723    3296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0721 23:49:56.285723    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:58.466808    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:58.466808    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:58.467788    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:01.029845    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:01.030501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:01.036243    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:01.036485    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:01.036485    3296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0721 23:50:01.200267    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0721 23:50:01.200414    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:06.025859    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:06.025859    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:06.031850    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:06.032255    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:06.032255    3296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0721 23:50:06.194379    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:50:06.194379    3296 machine.go:97] duration metric: took 44.5622996s to provisionDockerMachine
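	The unit write above is made idempotent with a compare-then-swap: diff -u exits zero when the rendered unit matches the installed one, so the mv/daemon-reload/enable/restart chain only runs when something actually changed. The exact command, reflowed for readability:
	
		sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
		  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
		       sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	
	The empty SSH output at 23:50:06 suggests diff printed nothing, i.e. the installed unit was already up to date and no restart was triggered here.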
	I0721 23:50:06.194379    3296 start.go:293] postStartSetup for "functional-264400" (driver="hyperv")
	I0721 23:50:06.194379    3296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:50:06.209650    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:50:06.209650    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:08.393053    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:08.393053    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:08.393698    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:10.989526    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:10.989526    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:10.989613    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:11.100095    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8903841s)
	I0721 23:50:11.113697    3296 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:50:11.120917    3296 command_runner.go:130] > NAME=Buildroot
	I0721 23:50:11.120917    3296 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0721 23:50:11.120917    3296 command_runner.go:130] > ID=buildroot
	I0721 23:50:11.120917    3296 command_runner.go:130] > VERSION_ID=2023.02.9
	I0721 23:50:11.120999    3296 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0721 23:50:11.121050    3296 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:50:11.121050    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0721 23:50:11.121510    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0721 23:50:11.122518    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0721 23:50:11.122575    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0721 23:50:11.123543    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts -> hosts in /etc/test/nested/copy/5100
	I0721 23:50:11.123618    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts -> /etc/test/nested/copy/5100/hosts
	I0721 23:50:11.133586    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/5100
	I0721 23:50:11.152687    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0721 23:50:11.202971    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts --> /etc/test/nested/copy/5100/hosts (40 bytes)
	I0721 23:50:11.255289    3296 start.go:296] duration metric: took 5.0608472s for postStartSetup
	I0721 23:50:11.255289    3296 fix.go:56] duration metric: took 52.4546661s for fixHost
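	postStartSetup also mirrors the contents of %MINIKUBE_HOME%\files into the guest at the same relative paths, which is how the test's cert bundle and nested hosts file land in the VM; to check from inside the guest:
	
		ls -l /etc/ssl/certs/51002.pem /etc/test/nested/copy/5100/hosts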
	I0721 23:50:11.255289    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:13.434592    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:13.434592    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:13.435305    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:16.055310    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:16.055310    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:16.061461    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:16.061461    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:16.061461    3296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0721 23:50:16.203294    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605816.220500350
	
	I0721 23:50:16.203389    3296 fix.go:216] guest clock: 1721605816.220500350
	I0721 23:50:16.203389    3296 fix.go:229] Guest: 2024-07-21 23:50:16.22050035 +0000 UTC Remote: 2024-07-21 23:50:11.2552893 +0000 UTC m=+58.166615301 (delta=4.96521105s)
	I0721 23:50:16.203490    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:18.378670    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:18.378670    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:18.378758    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:21.010405    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:21.010405    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:21.016091    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:21.016289    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:21.016289    3296 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721605816
	I0721 23:50:21.170845    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Jul 21 23:50:16 UTC 2024
	
	I0721 23:50:21.171182    3296 fix.go:236] clock set: Sun Jul 21 23:50:16 UTC 2024
	 (err=<nil>)
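	The ~5s delta reported above evidently exceeds the allowed guest/host skew, since the clock is reset from the host's epoch seconds. The fix is two plain commands over SSH:
	
		date +%s.%N                 # read the guest clock
		sudo date -s @1721605816    # set it from the host-side timestamp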
	I0721 23:50:21.171182    3296 start.go:83] releasing machines lock for "functional-264400", held for 1m2.3714351s
	I0721 23:50:21.171265    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:23.395806    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:23.395850    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:23.395850    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:26.024178    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:26.024178    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:26.028577    3296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0721 23:50:26.028739    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:26.043523    3296 ssh_runner.go:195] Run: cat /version.json
	I0721 23:50:26.043523    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:28.404030    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:28.404030    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:31.161323    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:31.161323    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:31.162685    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:31.218457    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:31.219166    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:31.219224    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:31.264926    3296 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0721 23:50:31.265653    3296 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.236833s)
	W0721 23:50:31.265743    3296 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0721 23:50:31.313118    3296 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0721 23:50:31.313718    3296 ssh_runner.go:235] Completed: cat /version.json: (5.2701285s)
	I0721 23:50:31.326430    3296 ssh_runner.go:195] Run: systemctl --version
	I0721 23:50:31.335559    3296 command_runner.go:130] > systemd 252 (252)
	I0721 23:50:31.335630    3296 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0721 23:50:31.347271    3296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0721 23:50:31.356110    3296 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0721 23:50:31.356110    3296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:50:31.367122    3296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0721 23:50:31.377018    3296 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0721 23:50:31.377190    3296 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0721 23:50:31.390830    3296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0721 23:50:31.390919    3296 start.go:495] detecting cgroup driver to use...
	I0721 23:50:31.391177    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:50:31.430605    3296 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0721 23:50:31.443163    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0721 23:50:31.473200    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0721 23:50:31.495064    3296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0721 23:50:31.505345    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0721 23:50:31.537330    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:50:31.570237    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0721 23:50:31.603641    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:50:31.634289    3296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:50:31.667749    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0721 23:50:31.699347    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0721 23:50:31.728970    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0721 23:50:31.758020    3296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:50:31.777872    3296 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0721 23:50:31.788667    3296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 23:50:31.817394    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:50:32.095962    3296 ssh_runner.go:195] Run: sudo systemctl restart containerd
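	Even though docker is the selected runtime, containerd is normalized first: the sandbox image, cgroup driver, runc shim version, and CNI conf dir are all rewritten in /etc/containerd/config.toml with in-place sed edits. The cgroup-driver edit is the load-bearing one, forcing cgroupfs (presumably to keep every runtime on a single driver):
	
		sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
		sudo systemctl daemon-reload && sudo systemctl restart containerd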
	I0721 23:50:32.129994    3296 start.go:495] detecting cgroup driver to use...
	I0721 23:50:32.144084    3296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0721 23:50:32.171240    3296 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0721 23:50:32.171513    3296 command_runner.go:130] > [Unit]
	I0721 23:50:32.171513    3296 command_runner.go:130] > Description=Docker Application Container Engine
	I0721 23:50:32.171513    3296 command_runner.go:130] > Documentation=https://docs.docker.com
	I0721 23:50:32.171513    3296 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0721 23:50:32.171513    3296 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0721 23:50:32.171626    3296 command_runner.go:130] > StartLimitBurst=3
	I0721 23:50:32.171626    3296 command_runner.go:130] > StartLimitIntervalSec=60
	I0721 23:50:32.171626    3296 command_runner.go:130] > [Service]
	I0721 23:50:32.171626    3296 command_runner.go:130] > Type=notify
	I0721 23:50:32.171626    3296 command_runner.go:130] > Restart=on-failure
	I0721 23:50:32.171626    3296 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0721 23:50:32.171697    3296 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0721 23:50:32.171697    3296 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0721 23:50:32.171697    3296 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0721 23:50:32.171697    3296 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0721 23:50:32.171697    3296 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0721 23:50:32.171763    3296 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0721 23:50:32.171763    3296 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0721 23:50:32.171763    3296 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0721 23:50:32.171763    3296 command_runner.go:130] > ExecStart=
	I0721 23:50:32.171845    3296 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0721 23:50:32.171845    3296 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0721 23:50:32.171845    3296 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0721 23:50:32.171910    3296 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0721 23:50:32.171935    3296 command_runner.go:130] > LimitNOFILE=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > LimitNPROC=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > LimitCORE=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0721 23:50:32.171968    3296 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0721 23:50:32.171968    3296 command_runner.go:130] > TasksMax=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > TimeoutStartSec=0
	I0721 23:50:32.171968    3296 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0721 23:50:32.171968    3296 command_runner.go:130] > Delegate=yes
	I0721 23:50:32.171968    3296 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0721 23:50:32.171968    3296 command_runner.go:130] > KillMode=process
	I0721 23:50:32.171968    3296 command_runner.go:130] > [Install]
	I0721 23:50:32.171968    3296 command_runner.go:130] > WantedBy=multi-user.target
	I0721 23:50:32.185323    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:50:32.222308    3296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:50:32.269127    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:50:32.308679    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0721 23:50:32.337720    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:50:32.373754    3296 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
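	Note the endpoint flip: /etc/crictl.yaml was first pointed at containerd's socket during runtime detection, and is now rewritten for the docker path so that crictl talks to cri-dockerd. An equivalent one-liner:
	
		printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | sudo tee /etc/crictl.yaml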
	I0721 23:50:32.387559    3296 ssh_runner.go:195] Run: which cri-dockerd
	I0721 23:50:32.393567    3296 command_runner.go:130] > /usr/bin/cri-dockerd
	I0721 23:50:32.407091    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0721 23:50:32.429103    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0721 23:50:32.473119    3296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0721 23:50:32.747171    3296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0721 23:50:32.998956    3296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0721 23:50:32.999296    3296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0721 23:50:33.051719    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:50:33.356719    3296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 23:51:44.652209    3296 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0721 23:51:44.652633    3296 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0721 23:51:44.654236    3296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.2965818s)
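	This is the proximate failure of the test: the freshly configured docker.service will not come back up, and systemctl restart only returns after a ~71s timeout/retry cycle. The triage commands are the ones the error message names, run inside the VM:
	
		systemctl status docker.service
		journalctl -xeu docker.service
	
	minikube runs the equivalent journalctl query itself on the next line; the captured journal that follows traces the daemon's earlier start/stop history leading up to the failed restart.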
	I0721 23:51:44.666167    3296 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.168118118Z" level=info msg="Starting up"
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.169181481Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.170711772Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.204506281Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239101537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239202743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239269947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239286548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239363452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239504161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239689572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239796878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239818179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239829580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240023691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240532022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243523700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243618405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244010128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244130936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244288745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244514558Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274608247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274731654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274757156Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274774157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274806859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.698341    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275350391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.698341    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275567104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275667010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275688011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275707112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275721313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275742514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275764116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275780417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275794017Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275807418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698670    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275819619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275876422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275890923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698817    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275939726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698817    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275958027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275970928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275983929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275997230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276018931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276036232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276049233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276066634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276084135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276105336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699059    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276119437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699059    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276132038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276357651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276454457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276513660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699204    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276580764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.699204    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276655869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699260    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276712372Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.699260    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276762075Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277188900Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277433015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277589224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.278054352Z" level=info msg="containerd successfully booted in 0.074903s"
	I0721 23:51:44.699388    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.247751721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.277834397Z" level=info msg="Loading containers: start."
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.441509517Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.655815314Z" level=info msg="Loading containers: done."
	I0721 23:51:44.699472    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676595884Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.699498    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676745891Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.699498    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788327964Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788443669Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.978875672Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980386251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980770345Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980878444Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980936643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.044964117Z" level=info msg="Starting up"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.046051302Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.047547081Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1095
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.077138071Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103738503Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103854902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103894101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103909101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103931301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103942600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104085398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104215897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104236396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104246796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104289796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104467393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108266041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108366439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108599936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108922331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109041730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109088329Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109284326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109335126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109351726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109365825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109378125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109446524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110271513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110431611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110840005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110866105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110891004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110910804Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110947503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700871    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110983003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700871    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111002703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111019702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111038702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111054502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111096201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111137101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111175900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111189300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111205600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111236299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111251899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111274399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111294599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111330498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111345998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111376797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111394397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111421297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111457096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111535995Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111594594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111638794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111653394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111706593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111722293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111736992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111747992Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112862377Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112947276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113020375Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113041274Z" level=info msg="containerd successfully booted in 0.036788s"
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.102172085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.122803299Z" level=info msg="Loading containers: start."
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.249728942Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.363421569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.454819504Z" level=info msg="Loading containers: done."
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478314979Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478440677Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523349955Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.701834    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523496853Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.403414153Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.404940232Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405762121Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405911219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405963218Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.702027    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 dockerd[1439]: time="2024-07-21T23:48:27.488211040Z" level=info msg="Starting up"
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.283164837Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.284334421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1445
	I0721 23:51:44.702206    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.322546392Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.353969657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354127155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354245353Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.702289    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354279453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354386052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354424751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354988043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355091642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355116141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355128941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355204740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355558335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359334983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359441882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359612579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359749577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359878975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359993174Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360138772Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360266770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.702688    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360289170Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360306770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360434168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360490167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360944161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361072859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361207757Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361229957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361245657Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361275356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361389255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361429254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361568652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361594052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361609452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361622451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361656951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361680651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361901447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361999446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362019946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362046645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362061445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362075845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362092245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362111045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362124244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362136944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362154644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362178044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362192043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362211643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362342741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362390341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362406041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362418640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362429040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362444140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.703684    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362455640Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.703715    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362742536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.703715    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362893434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362971133Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362995232Z" level=info msg="containerd successfully booted in 0.041146s"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:29 functional-264400 dockerd[1439]: time="2024-07-21T23:48:29.329544955Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.660319456Z" level=info msg="Loading containers: start."
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.796232675Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.907798631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.001144539Z" level=info msg="Loading containers: done."
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022589743Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022719941Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067087927Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067159926Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203705562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203993309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204174339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204501992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275055860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275220587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275259793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275372211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333574371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333646683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333744099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333850816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416645674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416770094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416839505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.417133553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625603538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625875582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704442    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625899586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704622    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.626009704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704779    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776176512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776370643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776546172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.835904420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836147160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836225472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836649541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887079538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887333179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887543914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887899671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.134772975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141087657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141198860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141750876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576099088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576165990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576179490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576332795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.700943823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701110428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701133028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705465    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701305233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705465    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.251787691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705516    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252007895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705516    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252034496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252193199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.458949480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459063270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459134864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459296351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733493277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733949139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734221216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734462295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.409481815Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.656026383Z" level=info msg="ignoring event" container=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.657306959Z" level=info msg="shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658560636Z" level=warning msg="cleaning up after shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658678934Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.676709403Z" level=info msg="ignoring event" container=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.677164894Z" level=info msg="shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678209575Z" level=warning msg="cleaning up after shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678304373Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.695081165Z" level=info msg="ignoring event" container=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695302161Z" level=info msg="shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695385859Z" level=warning msg="cleaning up after shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695446458Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.701015856Z" level=info msg="ignoring event" container=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.702594427Z" level=info msg="shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704149698Z" level=warning msg="cleaning up after shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	I0721 23:51:44.706258    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704221897Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.728693847Z" level=info msg="shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729328035Z" level=info msg="ignoring event" container=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729433134Z" level=info msg="ignoring event" container=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.731072903Z" level=warning msg="cleaning up after shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.734341743Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.734844834Z" level=info msg="ignoring event" container=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.735006831Z" level=info msg="ignoring event" container=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.743975166Z" level=info msg="shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744093164Z" level=warning msg="cleaning up after shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744205762Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.730359917Z" level=info msg="shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751792823Z" level=warning msg="cleaning up after shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751834022Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759660178Z" level=info msg="ignoring event" container=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759862574Z" level=info msg="ignoring event" container=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760069570Z" level=info msg="ignoring event" container=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760281966Z" level=info msg="ignoring event" container=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760277567Z" level=info msg="shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760380865Z" level=warning msg="cleaning up after shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760394364Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.748823577Z" level=info msg="shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765443172Z" level=warning msg="cleaning up after shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765461071Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769325900Z" level=info msg="shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769546096Z" level=warning msg="cleaning up after shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769827691Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706951    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774921997Z" level=info msg="shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	I0721 23:51:44.706951    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774984396Z" level=warning msg="cleaning up after shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	I0721 23:51:44.706995    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774997396Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788278152Z" level=info msg="shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788393450Z" level=warning msg="cleaning up after shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788444649Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.846647379Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1439]: time="2024-07-21T23:50:38.541510181Z" level=info msg="ignoring event" container=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.544122633Z" level=info msg="shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545450508Z" level=warning msg="cleaning up after shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545830901Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.461769452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.504142282Z" level=info msg="ignoring event" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504338210Z" level=info msg="shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504430323Z" level=warning msg="cleaning up after shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504443725Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.578959353Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579851478Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579966294Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.580111114Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Consumed 5.235s CPU time.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 dockerd[4061]: time="2024-07-21T23:50:44.647231378Z" level=info msg="Starting up"
	I0721 23:51:44.707551    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 dockerd[4061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0721 23:51:44.707551    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0721 23:51:44.707593    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0721 23:51:44.707593    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	I0721 23:51:44.736086    3296 out.go:177] 
	W0721 23:51:44.740389    3296 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 21 23:47:42 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.168118118Z" level=info msg="Starting up"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.169181481Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.170711772Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.204506281Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239101537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239202743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239269947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239286548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239363452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239504161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239689572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239796878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239818179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239829580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240023691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240532022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243523700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243618405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244010128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244130936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244288745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244514558Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274608247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274731654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274757156Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274774157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274806859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275350391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275567104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275667010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275688011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275707112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275721313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275742514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275764116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275780417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275794017Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275807418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275819619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275876422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275890923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275939726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275958027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275970928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275983929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275997230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276018931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276036232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276049233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276066634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276084135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276105336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276119437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276132038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276357651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276454457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276513660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276580764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276655869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276712372Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276762075Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277188900Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277433015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277589224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.278054352Z" level=info msg="containerd successfully booted in 0.074903s"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.247751721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.277834397Z" level=info msg="Loading containers: start."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.441509517Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.655815314Z" level=info msg="Loading containers: done."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676595884Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676745891Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788327964Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788443669Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:47:43 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.978875672Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:14 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980386251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980770345Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980878444Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980936643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:15 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:15 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:15 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.044964117Z" level=info msg="Starting up"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.046051302Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.047547081Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1095
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.077138071Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103738503Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103854902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103894101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103909101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103931301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103942600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104085398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104215897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104236396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104246796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104289796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104467393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108266041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108366439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108599936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108922331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109041730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109088329Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109284326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109335126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109351726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109365825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109378125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109446524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110271513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110431611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110840005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110866105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110891004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110910804Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110947503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110983003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111002703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111019702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111038702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111054502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111096201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111137101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111175900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111189300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111205600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111236299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111251899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111274399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111294599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111330498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111345998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111376797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111394397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111421297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111457096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111535995Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111594594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111638794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111653394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111706593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111722293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111736992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111747992Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112862377Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112947276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113020375Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113041274Z" level=info msg="containerd successfully booted in 0.036788s"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.102172085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.122803299Z" level=info msg="Loading containers: start."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.249728942Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.363421569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.454819504Z" level=info msg="Loading containers: done."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478314979Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478440677Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523349955Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:17 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523496853Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:26 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.403414153Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.404940232Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405762121Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405911219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405963218Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:27 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:27 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:27 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:27 functional-264400 dockerd[1439]: time="2024-07-21T23:48:27.488211040Z" level=info msg="Starting up"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.283164837Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.284334421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1445
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.322546392Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.353969657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354127155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354245353Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354279453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354386052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354424751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354988043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355091642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355116141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355128941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355204740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355558335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359334983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359441882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359612579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359749577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359878975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359993174Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360138772Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360266770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360289170Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360306770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360434168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360490167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360944161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361072859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361207757Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361229957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361245657Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361275356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361389255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361429254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361568652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361594052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361609452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361622451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361656951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361680651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361901447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361999446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362019946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362046645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362061445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362075845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362092245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362111045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362124244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362136944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362154644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362178044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362192043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362211643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362342741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362390341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362406041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362418640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362429040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362444140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362455640Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362742536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362893434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362971133Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362995232Z" level=info msg="containerd successfully booted in 0.041146s"
	Jul 21 23:48:29 functional-264400 dockerd[1439]: time="2024-07-21T23:48:29.329544955Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.660319456Z" level=info msg="Loading containers: start."
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.796232675Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.907798631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.001144539Z" level=info msg="Loading containers: done."
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022589743Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022719941Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067087927Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067159926Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:33 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203705562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203993309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204174339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204501992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275055860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275220587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275259793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275372211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333574371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333646683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333744099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333850816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416645674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416770094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416839505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.417133553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625603538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625875582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625899586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.626009704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776176512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776370643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776546172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.835904420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836147160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836225472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836649541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887079538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887333179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887543914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887899671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.134772975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141087657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141198860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141750876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576099088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576165990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576179490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576332795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.700943823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701110428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701133028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701305233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.251787691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252007895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252034496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252193199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.458949480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459063270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459134864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459296351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733493277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733949139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734221216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734462295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:50:33 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.409481815Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.656026383Z" level=info msg="ignoring event" container=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.657306959Z" level=info msg="shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658560636Z" level=warning msg="cleaning up after shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658678934Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.676709403Z" level=info msg="ignoring event" container=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.677164894Z" level=info msg="shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678209575Z" level=warning msg="cleaning up after shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678304373Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.695081165Z" level=info msg="ignoring event" container=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695302161Z" level=info msg="shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695385859Z" level=warning msg="cleaning up after shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695446458Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.701015856Z" level=info msg="ignoring event" container=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.702594427Z" level=info msg="shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704149698Z" level=warning msg="cleaning up after shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704221897Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.728693847Z" level=info msg="shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729328035Z" level=info msg="ignoring event" container=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729433134Z" level=info msg="ignoring event" container=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.731072903Z" level=warning msg="cleaning up after shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.734341743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.734844834Z" level=info msg="ignoring event" container=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.735006831Z" level=info msg="ignoring event" container=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.743975166Z" level=info msg="shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744093164Z" level=warning msg="cleaning up after shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744205762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.730359917Z" level=info msg="shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751792823Z" level=warning msg="cleaning up after shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751834022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759660178Z" level=info msg="ignoring event" container=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759862574Z" level=info msg="ignoring event" container=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760069570Z" level=info msg="ignoring event" container=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760281966Z" level=info msg="ignoring event" container=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760277567Z" level=info msg="shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760380865Z" level=warning msg="cleaning up after shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760394364Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.748823577Z" level=info msg="shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765443172Z" level=warning msg="cleaning up after shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765461071Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769325900Z" level=info msg="shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769546096Z" level=warning msg="cleaning up after shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769827691Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774921997Z" level=info msg="shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774984396Z" level=warning msg="cleaning up after shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774997396Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788278152Z" level=info msg="shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788393450Z" level=warning msg="cleaning up after shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788444649Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.846647379Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1439]: time="2024-07-21T23:50:38.541510181Z" level=info msg="ignoring event" container=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.544122633Z" level=info msg="shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545450508Z" level=warning msg="cleaning up after shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545830901Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.461769452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.504142282Z" level=info msg="ignoring event" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504338210Z" level=info msg="shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504430323Z" level=warning msg="cleaning up after shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504443725Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.578959353Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579851478Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579966294Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.580111114Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:50:44 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Consumed 5.235s CPU time.
	Jul 21 23:50:44 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:50:44 functional-264400 dockerd[4061]: time="2024-07-21T23:50:44.647231378Z" level=info msg="Starting up"
	Jul 21 23:51:44 functional-264400 dockerd[4061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 21 23:51:44 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
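	
	The proximate cause is recorded in the journal above: after systemd restarted docker.service, the new dockerd (pid 4061) timed out dialing /run/containerd/containerd.sock. Unlike the earlier starts in this journal, where dockerd launched its own managed containerd on /var/run/docker/containerd/containerd.sock, that path points at the system containerd socket, suggesting the system containerd unit was not up when docker.service was restarted. A minimal diagnostic sketch along the lines the systemd message suggests, assuming the functional-264400 guest is still running and reachable (`minikube ssh -- <cmd>` runs a single command on the node):
	
	  # Inspect the failed unit and the system containerd it now depends on
	  out/minikube-windows-amd64.exe ssh -p functional-264400 -- sudo systemctl status docker.service containerd.service
	  # Pull the unit's recent journal, as the error text recommends
	  out/minikube-windows-amd64.exe ssh -p functional-264400 -- sudo journalctl -xeu docker.service --no-pager
	  # Verify the socket dockerd failed to dial actually exists
	  out/minikube-windows-amd64.exe ssh -p functional-264400 -- sudo ls -l /run/containerd/containerd.sock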
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 21 23:47:42 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.168118118Z" level=info msg="Starting up"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.169181481Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.170711772Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.204506281Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239101537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239202743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239269947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239286548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239363452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239504161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239689572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239796878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239818179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239829580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240023691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240532022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243523700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243618405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244010128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244130936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244288745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244514558Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274608247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274731654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274757156Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274774157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274806859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275350391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275567104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275667010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275688011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275707112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275721313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275742514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275764116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275780417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275794017Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275807418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275819619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275876422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275890923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275939726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275958027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275970928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275983929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275997230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276018931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276036232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276049233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276066634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276084135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276105336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276119437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276132038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276357651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276454457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276513660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276580764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276655869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276712372Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276762075Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277188900Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277433015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277589224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.278054352Z" level=info msg="containerd successfully booted in 0.074903s"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.247751721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.277834397Z" level=info msg="Loading containers: start."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.441509517Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.655815314Z" level=info msg="Loading containers: done."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676595884Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676745891Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788327964Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788443669Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:47:43 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.978875672Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:14 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980386251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980770345Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980878444Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980936643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:15 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:15 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:15 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.044964117Z" level=info msg="Starting up"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.046051302Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.047547081Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1095
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.077138071Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103738503Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103854902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103894101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103909101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103931301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103942600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104085398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104215897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104236396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104246796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104289796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104467393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108266041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108366439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108599936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108922331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109041730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109088329Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109284326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109335126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109351726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109365825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109378125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109446524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110271513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110431611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110840005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110866105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110891004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110910804Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110947503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110983003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111002703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111019702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111038702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111054502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111096201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111137101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111175900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111189300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111205600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111236299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111251899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111274399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111294599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111330498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111345998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111376797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111394397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111421297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111457096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111535995Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111594594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111638794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111653394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111706593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111722293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111736992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111747992Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112862377Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112947276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113020375Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113041274Z" level=info msg="containerd successfully booted in 0.036788s"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.102172085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.122803299Z" level=info msg="Loading containers: start."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.249728942Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.363421569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.454819504Z" level=info msg="Loading containers: done."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478314979Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478440677Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523349955Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:17 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523496853Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:26 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.403414153Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.404940232Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405762121Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405911219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405963218Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:27 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:27 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:27 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:27 functional-264400 dockerd[1439]: time="2024-07-21T23:48:27.488211040Z" level=info msg="Starting up"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.283164837Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.284334421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1445
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.322546392Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.353969657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354127155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354245353Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354279453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354386052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354424751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354988043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355091642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355116141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355128941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355204740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355558335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359334983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359441882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359612579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359749577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359878975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359993174Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360138772Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360266770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360289170Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360306770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360434168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360490167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360944161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361072859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361207757Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361229957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361245657Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361275356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361389255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361429254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361568652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361594052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361609452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361622451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361656951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361680651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361901447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361999446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362019946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362046645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362061445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362075845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362092245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362111045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362124244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362136944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362154644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362178044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362192043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362211643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362342741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362390341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362406041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362418640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362429040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362444140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362455640Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362742536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362893434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362971133Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362995232Z" level=info msg="containerd successfully booted in 0.041146s"
	Jul 21 23:48:29 functional-264400 dockerd[1439]: time="2024-07-21T23:48:29.329544955Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.660319456Z" level=info msg="Loading containers: start."
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.796232675Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.907798631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.001144539Z" level=info msg="Loading containers: done."
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022589743Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022719941Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067087927Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067159926Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:33 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203705562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203993309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204174339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204501992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275055860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275220587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275259793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275372211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333574371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333646683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333744099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333850816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416645674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416770094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416839505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.417133553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625603538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625875582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625899586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.626009704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776176512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776370643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776546172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.835904420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836147160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836225472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836649541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887079538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887333179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887543914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887899671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.134772975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141087657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141198860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141750876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576099088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576165990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576179490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576332795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.700943823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701110428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701133028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701305233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.251787691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252007895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252034496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252193199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.458949480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459063270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459134864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459296351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733493277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733949139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734221216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734462295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:50:33 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.409481815Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.656026383Z" level=info msg="ignoring event" container=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.657306959Z" level=info msg="shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658560636Z" level=warning msg="cleaning up after shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658678934Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.676709403Z" level=info msg="ignoring event" container=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.677164894Z" level=info msg="shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678209575Z" level=warning msg="cleaning up after shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678304373Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.695081165Z" level=info msg="ignoring event" container=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695302161Z" level=info msg="shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695385859Z" level=warning msg="cleaning up after shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695446458Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.701015856Z" level=info msg="ignoring event" container=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.702594427Z" level=info msg="shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704149698Z" level=warning msg="cleaning up after shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704221897Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.728693847Z" level=info msg="shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729328035Z" level=info msg="ignoring event" container=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729433134Z" level=info msg="ignoring event" container=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.731072903Z" level=warning msg="cleaning up after shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.734341743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.734844834Z" level=info msg="ignoring event" container=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.735006831Z" level=info msg="ignoring event" container=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.743975166Z" level=info msg="shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744093164Z" level=warning msg="cleaning up after shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744205762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.730359917Z" level=info msg="shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751792823Z" level=warning msg="cleaning up after shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751834022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759660178Z" level=info msg="ignoring event" container=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759862574Z" level=info msg="ignoring event" container=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760069570Z" level=info msg="ignoring event" container=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760281966Z" level=info msg="ignoring event" container=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760277567Z" level=info msg="shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760380865Z" level=warning msg="cleaning up after shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760394364Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.748823577Z" level=info msg="shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765443172Z" level=warning msg="cleaning up after shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765461071Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769325900Z" level=info msg="shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769546096Z" level=warning msg="cleaning up after shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769827691Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774921997Z" level=info msg="shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774984396Z" level=warning msg="cleaning up after shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774997396Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788278152Z" level=info msg="shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788393450Z" level=warning msg="cleaning up after shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788444649Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.846647379Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1439]: time="2024-07-21T23:50:38.541510181Z" level=info msg="ignoring event" container=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.544122633Z" level=info msg="shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545450508Z" level=warning msg="cleaning up after shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545830901Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.461769452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.504142282Z" level=info msg="ignoring event" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504338210Z" level=info msg="shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504430323Z" level=warning msg="cleaning up after shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504443725Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.578959353Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579851478Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579966294Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.580111114Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:50:44 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Consumed 5.235s CPU time.
	Jul 21 23:50:44 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:50:44 functional-264400 dockerd[4061]: time="2024-07-21T23:50:44.647231378Z" level=info msg="Starting up"
	Jul 21 23:51:44 functional-264400 dockerd[4061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 21 23:51:44 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0721 23:51:44.740966    3296 out.go:239] * 
	W0721 23:51:44.742865    3296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 23:51:44.751682    3296 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-264400 --alsologtostderr -v=8": exit status 90
functional_test.go:659: soft start took 2m32.2475333s for "functional-264400" cluster.
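Note on the failure above: the terminal error in the captured journal is dockerd[4061] giving up on the managed containerd socket ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"), after which systemd marks docker.service failed and the soft start exits with status 90. The following is a rough, hypothetical sketch (plain Go standard library, not minikube or moby code; only the socket path is taken from the log) of where that exact error string comes from: a dial retried under a bounded context surfaces ctx.Err() once the deadline passes with nothing listening.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// Hypothetical sketch: retry dialing a unix socket until a context
// deadline expires, the pattern behind the dockerd startup error above.
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	var d net.Dialer
	const sock = "/run/containerd/containerd.sock" // path copied from the log
	for {
		conn, err := d.DialContext(ctx, "unix", sock)
		if err == nil {
			conn.Close()
			fmt.Println("containerd socket is up")
			return
		}
		select {
		case <-ctx.Done():
			// With no listener ever appearing, this prints
			// "context deadline exceeded", matching dockerd[4061].
			fmt.Println("failed to dial:", ctx.Err())
			return
		case <-time.After(100 * time.Millisecond):
		}
	}
}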
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-264400 -n functional-264400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-264400 -n functional-264400: exit status 2 (12.1713173s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:51:45.431541    5892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
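The harness tolerates this exit code because the host state printed to stdout was still "Running". As a hypothetical sketch (not harness code; the binary path and profile name are copied from the command above), this is the standard Go way to recover such an exit code while still reading the command's output:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Hypothetical sketch of recovering a command's exit code, the way a
// harness can continue when "exit status 2" still printed "Running".
func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"status", "--format={{.Host}}", "-p", "functional-264400")
	out, err := cmd.Output()
	fmt.Printf("host state: %s\n", out)

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A non-zero code is not fatal by itself here; the printed
		// host state above is what the post-mortem checks first.
		fmt.Println("status exit code:", ee.ExitCode())
	}
}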
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 logs -n 25
E0721 23:54:11.860542    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 logs -n 25: (2m48.0863956s)
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| addons  | addons-979300 addons disable                                          | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:36 UTC | 21 Jul 24 23:36 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-979300 addons                                                  | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:36 UTC | 21 Jul 24 23:36 UTC |
	|         | disable volumesnapshots                                               |                   |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                |                   |                   |         |                     |                     |
	| addons  | addons-979300 addons disable                                          | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:36 UTC | 21 Jul 24 23:37 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| addons  | addons-979300 addons disable                                          | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:37 UTC | 21 Jul 24 23:37 UTC |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| stop    | -p addons-979300                                                      | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:37 UTC | 21 Jul 24 23:38 UTC |
	| addons  | enable dashboard -p                                                   | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:38 UTC | 21 Jul 24 23:38 UTC |
	|         | addons-979300                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:38 UTC | 21 Jul 24 23:38 UTC |
	|         | addons-979300                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:38 UTC | 21 Jul 24 23:38 UTC |
	|         | addons-979300                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-979300                                                      | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:38 UTC | 21 Jul 24 23:39 UTC |
	| start   | -p nospam-420400 -n=1 --memory=2250 --wait=false                      | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:39 UTC | 21 Jul 24 23:42 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:42 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:42 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:42 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:43 UTC | 21 Jul 24 23:43 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:43 UTC | 21 Jul 24 23:43 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:43 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-420400                                                      | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	| start   | -p functional-264400                                                  | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:49 UTC |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-264400                                                  | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:49 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:49:13
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:49:13.243734    3296 out.go:291] Setting OutFile to fd 632 ...
	I0721 23:49:13.245089    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:49:13.245089    3296 out.go:304] Setting ErrFile to fd 612...
	I0721 23:49:13.245225    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:49:13.271799    3296 out.go:298] Setting JSON to false
	I0721 23:49:13.274576    3296 start.go:129] hostinfo: {"hostname":"minikube6","uptime":120960,"bootTime":1721484792,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0721 23:49:13.275656    3296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 23:49:13.279846    3296 out.go:177] * [functional-264400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0721 23:49:13.284743    3296 notify.go:220] Checking for updates...
	I0721 23:49:13.286577    3296 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0721 23:49:13.288761    3296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:49:13.292203    3296 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0721 23:49:13.295335    3296 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:49:13.299523    3296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:49:13.304221    3296 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:49:13.304533    3296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:49:18.732833    3296 out.go:177] * Using the hyperv driver based on existing profile
	I0721 23:49:18.737459    3296 start.go:297] selected driver: hyperv
	I0721 23:49:18.737459    3296 start.go:901] validating driver "hyperv" against &{Name:functional-264400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-264400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.97 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:49:18.737459    3296 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:49:18.788011    3296 cni.go:84] Creating CNI manager for ""
	I0721 23:49:18.788078    3296 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 23:49:18.788279    3296 start.go:340] cluster config:
	{Name:functional-264400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-264400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.97 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:49:18.788712    3296 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:49:18.793652    3296 out.go:177] * Starting "functional-264400" primary control-plane node in "functional-264400" cluster
	I0721 23:49:18.796373    3296 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 23:49:18.796558    3296 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 23:49:18.796558    3296 cache.go:56] Caching tarball of preloaded images
	I0721 23:49:18.796558    3296 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 23:49:18.796558    3296 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 23:49:18.797352    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\config.json ...
	I0721 23:49:18.798980    3296 start.go:360] acquireMachinesLock for functional-264400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:49:18.798980    3296 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-264400"
	I0721 23:49:18.798980    3296 start.go:96] Skipping create...Using existing machine configuration
	I0721 23:49:18.799979    3296 fix.go:54] fixHost starting: 
	I0721 23:49:18.799979    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:21.623488    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:21.623488    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:21.623488    3296 fix.go:112] recreateIfNeeded on functional-264400: state=Running err=<nil>
	W0721 23:49:21.623488    3296 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 23:49:21.629305    3296 out.go:177] * Updating the running hyperv "functional-264400" VM ...
	I0721 23:49:21.631533    3296 machine.go:94] provisionDockerMachine start ...
	I0721 23:49:21.631533    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:23.852288    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:23.852288    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:23.852522    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:26.470028    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:26.470028    3296 main.go:141] libmachine: [stderr =====>] : 
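
Note on the pattern above, which repeats throughout this log: the Hyper-V driver has no API client, so libmachine shells out to powershell.exe for every VM state and IP lookup, producing the paired [executing ==>] / [stdout =====>] lines. A minimal Go sketch of that pattern, illustrative only (hypervQuery is our name, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hypervQuery shells out to PowerShell the same way the "[executing ==>]"
// lines above do, and returns trimmed stdout.
func hypervQuery(expr string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", expr,
	).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, _ := hypervQuery(`( Hyper-V\Get-VM functional-264400 ).state`)
	ip, _ := hypervQuery(`(( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]`)
	fmt.Println(state, ip) // e.g. "Running 172.28.193.97", per the log above
}

Each round trip costs two to three seconds here, which is why simple steps like configureAuth take 14+ seconds in this run.
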
	I0721 23:49:26.478442    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:26.479167    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:26.479167    3296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0721 23:49:26.622339    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-264400
	
	I0721 23:49:26.622466    3296 buildroot.go:166] provisioning hostname "functional-264400"
	I0721 23:49:26.622607    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:28.824177    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:28.824557    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:28.824557    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:31.414467    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:31.414467    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:31.420319    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:31.421099    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:31.421099    3296 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-264400 && echo "functional-264400" | sudo tee /etc/hostname
	I0721 23:49:31.588774    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-264400
	
	I0721 23:49:31.588774    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:33.790066    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:33.790474    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:33.790474    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:36.394275    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:36.394275    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:36.399837    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:36.400299    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:36.400299    3296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-264400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-264400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-264400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:49:36.533255    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:49:36.533255    3296 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0721 23:49:36.533255    3296 buildroot.go:174] setting up certificates
	I0721 23:49:36.533255    3296 provision.go:84] configureAuth start
	I0721 23:49:36.533977    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:38.735744    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:38.735744    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:38.736834    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:41.319431    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:41.319431    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:41.319569    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:46.052701    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:46.052760    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:46.052760    3296 provision.go:143] copyHostCerts
	I0721 23:49:46.052760    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0721 23:49:46.053530    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0721 23:49:46.053530    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0721 23:49:46.054196    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0721 23:49:46.055555    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0721 23:49:46.055555    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0721 23:49:46.055555    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0721 23:49:46.056169    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0721 23:49:46.056925    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0721 23:49:46.057723    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0721 23:49:46.057723    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0721 23:49:46.057723    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0721 23:49:46.059166    3296 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-264400 san=[127.0.0.1 172.28.193.97 functional-264400 localhost minikube]
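
provision.go:117 above issues a server certificate covering san=[127.0.0.1 172.28.193.97 functional-264400 localhost minikube]. A hedged crypto/x509 sketch of that step follows; it self-signs for brevity, whereas the real step signs with the minikube CA (ca.pem/ca-key.pem above), and the 26280h lifetime mirrors CertExpiration in the config dump:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-264400"}}, // org= from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[...] from the provision.go:117 line:
		DNSNames:    []string{"functional-264400", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.193.97")},
	}
	// Self-signed here for brevity; the real step signs with the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
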
	I0721 23:49:46.255062    3296 provision.go:177] copyRemoteCerts
	I0721 23:49:46.265961    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:49:46.265961    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:48.459327    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:48.459802    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:48.459881    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:51.076501    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:51.076501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:51.078136    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:49:51.186062    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.919744s)
	I0721 23:49:51.186137    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0721 23:49:51.186285    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0721 23:49:51.234406    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0721 23:49:51.234628    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0721 23:49:51.286824    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0721 23:49:51.286998    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0721 23:49:51.338595    3296 provision.go:87] duration metric: took 14.8051233s to configureAuth
	I0721 23:49:51.338595    3296 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:49:51.339337    3296 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:49:51.339480    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:53.533831    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:53.534329    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:53.534329    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:56.137211    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:56.137352    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:56.143046    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:56.143046    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:56.143046    3296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0721 23:49:56.285359    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0721 23:49:56.285421    3296 buildroot.go:70] root file system type: tmpfs
	I0721 23:49:56.285723    3296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0721 23:49:56.285723    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:58.466808    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:58.466808    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:58.467788    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:01.029845    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:01.030501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:01.036243    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:01.036485    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:01.036485    3296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0721 23:50:01.200267    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0721 23:50:01.200414    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:06.025859    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:06.025859    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:06.031850    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:06.032255    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:06.032255    3296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0721 23:50:06.194379    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:50:06.194379    3296 machine.go:97] duration metric: took 44.5622996s to provisionDockerMachine
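
The SSH one-liner just completed is the idempotent unit-update idiom: render docker.service.new, diff it against the live unit, and only swap the file and restart Docker when they differ. A small Go sketch that assembles the same guarded command (updateUnitCmd is our name):

package main

import "fmt"

// updateUnitCmd reproduces the guarded swap the SSH command above performs:
// replace the unit and bounce Docker only when the newly rendered file
// differs, so re-provisioning an already-correct VM is a no-op.
func updateUnitCmd(unit string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", unit)
}

func main() {
	fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
}
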
	I0721 23:50:06.194379    3296 start.go:293] postStartSetup for "functional-264400" (driver="hyperv")
	I0721 23:50:06.194379    3296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:50:06.209650    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:50:06.209650    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:08.393053    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:08.393053    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:08.393698    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:10.989526    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:10.989526    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:10.989613    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:11.100095    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8903841s)
	I0721 23:50:11.113697    3296 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:50:11.120917    3296 command_runner.go:130] > NAME=Buildroot
	I0721 23:50:11.120917    3296 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0721 23:50:11.120917    3296 command_runner.go:130] > ID=buildroot
	I0721 23:50:11.120917    3296 command_runner.go:130] > VERSION_ID=2023.02.9
	I0721 23:50:11.120999    3296 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0721 23:50:11.121050    3296 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:50:11.121050    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0721 23:50:11.121510    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0721 23:50:11.122518    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0721 23:50:11.122575    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0721 23:50:11.123543    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts -> hosts in /etc/test/nested/copy/5100
	I0721 23:50:11.123618    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts -> /etc/test/nested/copy/5100/hosts
	I0721 23:50:11.133586    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/5100
	I0721 23:50:11.152687    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0721 23:50:11.202971    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts --> /etc/test/nested/copy/5100/hosts (40 bytes)
	I0721 23:50:11.255289    3296 start.go:296] duration metric: took 5.0608472s for postStartSetup
	I0721 23:50:11.255289    3296 fix.go:56] duration metric: took 52.4546661s for fixHost
	I0721 23:50:11.255289    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:13.434592    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:13.434592    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:13.435305    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:16.055310    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:16.055310    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:16.061461    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:16.061461    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:16.061461    3296 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0721 23:50:16.203294    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605816.220500350
	
	I0721 23:50:16.203389    3296 fix.go:216] guest clock: 1721605816.220500350
	I0721 23:50:16.203389    3296 fix.go:229] Guest: 2024-07-21 23:50:16.22050035 +0000 UTC Remote: 2024-07-21 23:50:11.2552893 +0000 UTC m=+58.166615301 (delta=4.96521105s)
	I0721 23:50:16.203490    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:18.378670    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:18.378670    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:18.378758    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:21.010405    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:21.010405    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:21.016091    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:21.016289    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:21.016289    3296 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721605816
	I0721 23:50:21.170845    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Jul 21 23:50:16 UTC 2024
	
	I0721 23:50:21.171182    3296 fix.go:236] clock set: Sun Jul 21 23:50:16 UTC 2024
	 (err=<nil>)
	I0721 23:50:21.171182    3296 start.go:83] releasing machines lock for "functional-264400", held for 1m2.3714351s
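
The fix.go lines above read the guest clock (epoch 1721605816.2205...), compute a ~4.965s delta against the host-side Remote timestamp, and issue sudo date -s to correct the skew. A sketch of that comparison; the 2s threshold is an assumption, as the real cut-off does not appear in this log, and fixClockCmd is our name:

package main

import (
	"fmt"
	"time"
)

// fixClockCmd compares the guest's epoch reading against the host-side
// "Remote" timestamp and, when the skew is large enough, emits the
// `sudo date -s @<epoch>` command shown in the log.
func fixClockCmd(guestEpochSec int64, hostRemote time.Time) (string, bool) {
	delta := time.Unix(guestEpochSec, 0).Sub(hostRemote)
	if delta < 0 {
		delta = -delta
	}
	if delta > 2*time.Second { // assumed threshold
		return fmt.Sprintf("sudo date -s @%d", guestEpochSec), true
	}
	return "", false
}

func main() {
	// Values from the log: guest 1721605816, host remote 23:50:11.2552893.
	cmd, needed := fixClockCmd(1721605816, time.Unix(1721605811, 255289300))
	fmt.Println(needed, cmd) // true sudo date -s @1721605816
}
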
	I0721 23:50:21.171265    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:23.395806    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:23.395850    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:23.395850    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:26.024178    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:26.024178    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:26.028577    3296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0721 23:50:26.028739    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:26.043523    3296 ssh_runner.go:195] Run: cat /version.json
	I0721 23:50:26.043523    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:28.404030    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:28.404030    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:31.161323    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:31.161323    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:31.162685    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:31.218457    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:31.219166    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:31.219224    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:31.264926    3296 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0721 23:50:31.265653    3296 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.236833s)
	W0721 23:50:31.265743    3296 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0721 23:50:31.313118    3296 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0721 23:50:31.313718    3296 ssh_runner.go:235] Completed: cat /version.json: (5.2701285s)
	I0721 23:50:31.326430    3296 ssh_runner.go:195] Run: systemctl --version
	I0721 23:50:31.335559    3296 command_runner.go:130] > systemd 252 (252)
	I0721 23:50:31.335630    3296 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0721 23:50:31.347271    3296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0721 23:50:31.356110    3296 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0721 23:50:31.356110    3296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:50:31.367122    3296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0721 23:50:31.377018    3296 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0721 23:50:31.377190    3296 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0721 23:50:31.390830    3296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0721 23:50:31.390919    3296 start.go:495] detecting cgroup driver to use...
	I0721 23:50:31.391177    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:50:31.430605    3296 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0721 23:50:31.443163    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0721 23:50:31.473200    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0721 23:50:31.495064    3296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0721 23:50:31.505345    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0721 23:50:31.537330    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:50:31.570237    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0721 23:50:31.603641    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:50:31.634289    3296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:50:31.667749    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0721 23:50:31.699347    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0721 23:50:31.728970    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0721 23:50:31.758020    3296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:50:31.777872    3296 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0721 23:50:31.788667    3296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 23:50:31.817394    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:50:32.095962    3296 ssh_runner.go:195] Run: sudo systemctl restart containerd
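
The sed sequence just run rewrites /etc/containerd/config.toml for the cgroupfs driver; the key edit is s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g. The same substitution as a Go regexp, shown for clarity (forceCgroupfs is our name):

package main

import (
	"fmt"
	"regexp"
)

// forceCgroupfs applies the sed above in Go: whatever SystemdCgroup is
// currently set to, rewrite the line to false, preserving indentation.
func forceCgroupfs(toml string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(toml, "${1}SystemdCgroup = false")
}

func main() {
	fmt.Println(forceCgroupfs("    SystemdCgroup = true"))
	// prints:     SystemdCgroup = false
}
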
	I0721 23:50:32.129994    3296 start.go:495] detecting cgroup driver to use...
	I0721 23:50:32.144084    3296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0721 23:50:32.171240    3296 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0721 23:50:32.171513    3296 command_runner.go:130] > [Unit]
	I0721 23:50:32.171513    3296 command_runner.go:130] > Description=Docker Application Container Engine
	I0721 23:50:32.171513    3296 command_runner.go:130] > Documentation=https://docs.docker.com
	I0721 23:50:32.171513    3296 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0721 23:50:32.171513    3296 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0721 23:50:32.171626    3296 command_runner.go:130] > StartLimitBurst=3
	I0721 23:50:32.171626    3296 command_runner.go:130] > StartLimitIntervalSec=60
	I0721 23:50:32.171626    3296 command_runner.go:130] > [Service]
	I0721 23:50:32.171626    3296 command_runner.go:130] > Type=notify
	I0721 23:50:32.171626    3296 command_runner.go:130] > Restart=on-failure
	I0721 23:50:32.171626    3296 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0721 23:50:32.171697    3296 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0721 23:50:32.171697    3296 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0721 23:50:32.171697    3296 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0721 23:50:32.171697    3296 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0721 23:50:32.171697    3296 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0721 23:50:32.171763    3296 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0721 23:50:32.171763    3296 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0721 23:50:32.171763    3296 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0721 23:50:32.171763    3296 command_runner.go:130] > ExecStart=
	I0721 23:50:32.171845    3296 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0721 23:50:32.171845    3296 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0721 23:50:32.171845    3296 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0721 23:50:32.171910    3296 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0721 23:50:32.171935    3296 command_runner.go:130] > LimitNOFILE=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > LimitNPROC=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > LimitCORE=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0721 23:50:32.171968    3296 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0721 23:50:32.171968    3296 command_runner.go:130] > TasksMax=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > TimeoutStartSec=0
	I0721 23:50:32.171968    3296 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0721 23:50:32.171968    3296 command_runner.go:130] > Delegate=yes
	I0721 23:50:32.171968    3296 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0721 23:50:32.171968    3296 command_runner.go:130] > KillMode=process
	I0721 23:50:32.171968    3296 command_runner.go:130] > [Install]
	I0721 23:50:32.171968    3296 command_runner.go:130] > WantedBy=multi-user.target
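	(The comments in the unit above describe the standard systemd override rule: a drop-in must first clear the inherited ExecStart= with an empty assignment before supplying its own, since multiple ExecStart= lines are only valid for Type=oneshot units. A minimal sketch of that same pattern for a hand-written drop-in — illustrative; minikube ships its own unit as printed above:
	  sudo mkdir -p /etc/systemd/system/docker.service.d
	  sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	  [Service]
	  ExecStart=
	  ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	  EOF
	  sudo systemctl daemon-reload    # required before the override takes effect
	)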
	I0721 23:50:32.185323    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:50:32.222308    3296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:50:32.269127    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:50:32.308679    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
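	(These checks rely on systemctl's exit status: with --quiet it prints nothing and exits 0 only when the unit is active, which is how the runner decides that containerd is stopped and crio is not running. Illustrative usage:
	  sudo systemctl is-active --quiet containerd && echo active || echo inactive
	)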
	I0721 23:50:32.337720    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:50:32.373754    3296 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
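	(With /etc/crictl.yaml pointing runtime-endpoint at the cri-dockerd socket, crictl can talk to the runtime without extra flags once cri-dockerd is up. Illustrative usage — equivalent to passing --runtime-endpoint unix:///var/run/cri-dockerd.sock explicitly:
	  sudo crictl info     # reads runtime-endpoint from /etc/crictl.yaml
	  sudo crictl ps -a    # lists all CRI containers via cri-dockerd
	)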
	I0721 23:50:32.387559    3296 ssh_runner.go:195] Run: which cri-dockerd
	I0721 23:50:32.393567    3296 command_runner.go:130] > /usr/bin/cri-dockerd
	I0721 23:50:32.407091    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0721 23:50:32.429103    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
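	(The 189-byte payload of 10-cni.conf is not echoed in this log. A drop-in of this kind typically uses the same ExecStart-reset pattern to point cri-dockerd at the CNI network plugin, along the lines of the following sketch — an assumption for illustration, not the verbatim file:
	  [Service]
	  ExecStart=
	  ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni
	)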
	I0721 23:50:32.473119    3296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0721 23:50:32.747171    3296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0721 23:50:32.998956    3296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0721 23:50:32.999296    3296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
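	(The 130-byte daemon.json written here is what selects "cgroupfs" as the cgroup driver; its exact contents are not shown in this log, but a dockerd configuration with that effect looks like the following — illustrative, using standard daemon.json keys:
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"],
	    "log-driver": "json-file",
	    "log-opts": { "max-size": "100m" },
	    "storage-driver": "overlay2"
	  }
	)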
	I0721 23:50:33.051719    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:50:33.356719    3296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 23:51:44.652209    3296 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0721 23:51:44.652633    3296 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0721 23:51:44.654236    3296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.2965818s)
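	(The restart command blocked for roughly 71 seconds before systemd reported that dockerd's control process exited with an error. The two commands suggested in the message above — which the runner effectively performs next via journalctl — are the standard way to inspect such a failure by hand:
	  sudo systemctl status docker.service --no-pager
	  sudo journalctl -xeu docker.service --no-pager
	)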
	I0721 23:51:44.666167    3296 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.168118118Z" level=info msg="Starting up"
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.169181481Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.170711772Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.204506281Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239101537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239202743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239269947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239286548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239363452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239504161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239689572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239796878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239818179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239829580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240023691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240532022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243523700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243618405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244010128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244130936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244288745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244514558Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274608247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274731654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274757156Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274774157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274806859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.698341    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275350391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.698341    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275567104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275667010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275688011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275707112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275721313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275742514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275764116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275780417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275794017Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275807418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698670    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275819619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275876422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275890923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698817    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275939726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698817    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275958027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275970928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275983929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275997230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276018931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276036232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276049233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276066634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276084135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276105336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699059    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276119437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699059    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276132038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276357651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276454457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276513660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699204    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276580764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.699204    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276655869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699260    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276712372Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.699260    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276762075Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277188900Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277433015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277589224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.278054352Z" level=info msg="containerd successfully booted in 0.074903s"
	I0721 23:51:44.699388    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.247751721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.277834397Z" level=info msg="Loading containers: start."
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.441509517Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.655815314Z" level=info msg="Loading containers: done."
	I0721 23:51:44.699472    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676595884Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.699498    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676745891Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.699498    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788327964Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788443669Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.978875672Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980386251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980770345Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980878444Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980936643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.044964117Z" level=info msg="Starting up"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.046051302Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.047547081Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1095
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.077138071Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103738503Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103854902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103894101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103909101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103931301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103942600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104085398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104215897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104236396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104246796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104289796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104467393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108266041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108366439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108599936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108922331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109041730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109088329Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109284326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109335126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109351726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109365825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109378125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109446524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110271513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110431611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110840005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110866105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110891004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110910804Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110947503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700871    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110983003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700871    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111002703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111019702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111038702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111054502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111096201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111137101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111175900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111189300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111205600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111236299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111251899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111274399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111294599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111330498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111345998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111376797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111394397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111421297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111457096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111535995Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111594594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111638794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111653394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111706593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111722293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111736992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111747992Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112862377Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112947276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113020375Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113041274Z" level=info msg="containerd successfully booted in 0.036788s"
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.102172085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.122803299Z" level=info msg="Loading containers: start."
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.249728942Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.363421569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.454819504Z" level=info msg="Loading containers: done."
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478314979Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478440677Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523349955Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.701834    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523496853Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.403414153Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.404940232Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405762121Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405911219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405963218Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.702027    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 dockerd[1439]: time="2024-07-21T23:48:27.488211040Z" level=info msg="Starting up"
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.283164837Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.284334421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1445
	I0721 23:51:44.702206    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.322546392Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.353969657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354127155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354245353Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.702289    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354279453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354386052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354424751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354988043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355091642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355116141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355128941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355204740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355558335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359334983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359441882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359612579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359749577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359878975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359993174Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360138772Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360266770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.702688    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360289170Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360306770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360434168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360490167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360944161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361072859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361207757Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361229957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361245657Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361275356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361389255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361429254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361568652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361594052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361609452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361622451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361656951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361680651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361901447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361999446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362019946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362046645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362061445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362075845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362092245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362111045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362124244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362136944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362154644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362178044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362192043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362211643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362342741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362390341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362406041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362418640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362429040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362444140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.703684    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362455640Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.703715    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362742536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.703715    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362893434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362971133Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362995232Z" level=info msg="containerd successfully booted in 0.041146s"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:29 functional-264400 dockerd[1439]: time="2024-07-21T23:48:29.329544955Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.660319456Z" level=info msg="Loading containers: start."
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.796232675Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.907798631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.001144539Z" level=info msg="Loading containers: done."
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022589743Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022719941Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067087927Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067159926Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203705562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203993309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204174339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204501992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275055860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275220587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275259793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275372211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333574371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333646683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333744099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333850816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416645674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416770094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416839505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.417133553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625603538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625875582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704442    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625899586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704622    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.626009704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704779    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776176512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776370643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776546172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.835904420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836147160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836225472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836649541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887079538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887333179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887543914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887899671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.134772975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141087657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141198860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141750876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576099088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576165990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576179490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576332795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.700943823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701110428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701133028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705465    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701305233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705465    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.251787691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705516    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252007895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705516    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252034496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252193199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.458949480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459063270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459134864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459296351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733493277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733949139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734221216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734462295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.409481815Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.656026383Z" level=info msg="ignoring event" container=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.657306959Z" level=info msg="shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658560636Z" level=warning msg="cleaning up after shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658678934Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.676709403Z" level=info msg="ignoring event" container=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.677164894Z" level=info msg="shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678209575Z" level=warning msg="cleaning up after shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678304373Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.695081165Z" level=info msg="ignoring event" container=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695302161Z" level=info msg="shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695385859Z" level=warning msg="cleaning up after shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695446458Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.701015856Z" level=info msg="ignoring event" container=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.702594427Z" level=info msg="shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704149698Z" level=warning msg="cleaning up after shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	I0721 23:51:44.706258    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704221897Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.728693847Z" level=info msg="shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729328035Z" level=info msg="ignoring event" container=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729433134Z" level=info msg="ignoring event" container=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.731072903Z" level=warning msg="cleaning up after shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.734341743Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.734844834Z" level=info msg="ignoring event" container=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.735006831Z" level=info msg="ignoring event" container=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.743975166Z" level=info msg="shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744093164Z" level=warning msg="cleaning up after shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744205762Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.730359917Z" level=info msg="shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751792823Z" level=warning msg="cleaning up after shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751834022Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759660178Z" level=info msg="ignoring event" container=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759862574Z" level=info msg="ignoring event" container=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760069570Z" level=info msg="ignoring event" container=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760281966Z" level=info msg="ignoring event" container=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760277567Z" level=info msg="shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760380865Z" level=warning msg="cleaning up after shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760394364Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.748823577Z" level=info msg="shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765443172Z" level=warning msg="cleaning up after shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765461071Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769325900Z" level=info msg="shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769546096Z" level=warning msg="cleaning up after shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769827691Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706951    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774921997Z" level=info msg="shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	I0721 23:51:44.706951    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774984396Z" level=warning msg="cleaning up after shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	I0721 23:51:44.706995    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774997396Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788278152Z" level=info msg="shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788393450Z" level=warning msg="cleaning up after shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788444649Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.846647379Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1439]: time="2024-07-21T23:50:38.541510181Z" level=info msg="ignoring event" container=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.544122633Z" level=info msg="shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545450508Z" level=warning msg="cleaning up after shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545830901Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.461769452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.504142282Z" level=info msg="ignoring event" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504338210Z" level=info msg="shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504430323Z" level=warning msg="cleaning up after shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504443725Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.578959353Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579851478Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579966294Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.580111114Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Consumed 5.235s CPU time.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 dockerd[4061]: time="2024-07-21T23:50:44.647231378Z" level=info msg="Starting up"
	I0721 23:51:44.707551    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 dockerd[4061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0721 23:51:44.707551    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0721 23:51:44.707593    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0721 23:51:44.707593    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	I0721 23:51:44.736086    3296 out.go:177] 
	W0721 23:51:44.740389    3296 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 21 23:47:42 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.168118118Z" level=info msg="Starting up"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.169181481Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.170711772Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.204506281Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239101537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239202743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239269947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239286548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239363452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239504161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239689572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239796878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239818179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239829580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240023691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240532022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243523700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243618405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244010128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244130936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244288745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244514558Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274608247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274731654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274757156Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274774157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274806859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275350391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275567104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275667010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275688011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275707112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275721313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275742514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275764116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275780417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275794017Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275807418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275819619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275876422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275890923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275939726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275958027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275970928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275983929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275997230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276018931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276036232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276049233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276066634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276084135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276105336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276119437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276132038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276357651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276454457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276513660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276580764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276655869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276712372Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276762075Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277188900Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277433015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277589224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.278054352Z" level=info msg="containerd successfully booted in 0.074903s"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.247751721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.277834397Z" level=info msg="Loading containers: start."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.441509517Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.655815314Z" level=info msg="Loading containers: done."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676595884Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676745891Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788327964Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788443669Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:47:43 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.978875672Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:14 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980386251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980770345Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980878444Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980936643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:15 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:15 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:15 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.044964117Z" level=info msg="Starting up"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.046051302Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.047547081Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1095
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.077138071Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103738503Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103854902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103894101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103909101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103931301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103942600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104085398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104215897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104236396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104246796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104289796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104467393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108266041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108366439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108599936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108922331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109041730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109088329Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109284326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109335126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109351726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109365825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109378125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109446524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110271513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110431611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110840005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110866105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110891004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110910804Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110947503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110983003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111002703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111019702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111038702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111054502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111096201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111137101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111175900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111189300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111205600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111236299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111251899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111274399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111294599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111330498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111345998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111376797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111394397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111421297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111457096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111535995Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111594594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111638794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111653394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111706593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111722293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111736992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111747992Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112862377Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112947276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113020375Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113041274Z" level=info msg="containerd successfully booted in 0.036788s"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.102172085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.122803299Z" level=info msg="Loading containers: start."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.249728942Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.363421569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.454819504Z" level=info msg="Loading containers: done."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478314979Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478440677Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523349955Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:17 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523496853Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:26 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.403414153Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.404940232Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405762121Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405911219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405963218Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:27 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:27 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:27 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:27 functional-264400 dockerd[1439]: time="2024-07-21T23:48:27.488211040Z" level=info msg="Starting up"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.283164837Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.284334421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1445
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.322546392Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.353969657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354127155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354245353Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354279453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354386052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354424751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354988043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355091642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355116141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355128941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355204740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355558335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359334983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359441882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359612579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359749577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359878975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359993174Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360138772Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360266770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360289170Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360306770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360434168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360490167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360944161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361072859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361207757Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361229957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361245657Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361275356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361389255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361429254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361568652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361594052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361609452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361622451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361656951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361680651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361901447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361999446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362019946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362046645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362061445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362075845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362092245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362111045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362124244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362136944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362154644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362178044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362192043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362211643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362342741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362390341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362406041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362418640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362429040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362444140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362455640Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362742536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362893434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362971133Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362995232Z" level=info msg="containerd successfully booted in 0.041146s"
	Jul 21 23:48:29 functional-264400 dockerd[1439]: time="2024-07-21T23:48:29.329544955Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.660319456Z" level=info msg="Loading containers: start."
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.796232675Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.907798631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.001144539Z" level=info msg="Loading containers: done."
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022589743Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022719941Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067087927Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067159926Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:33 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203705562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203993309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204174339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204501992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275055860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275220587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275259793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275372211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333574371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333646683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333744099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333850816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416645674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416770094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416839505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.417133553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625603538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625875582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625899586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.626009704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776176512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776370643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776546172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.835904420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836147160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836225472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836649541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887079538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887333179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887543914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887899671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.134772975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141087657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141198860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141750876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576099088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576165990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576179490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576332795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.700943823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701110428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701133028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701305233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.251787691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252007895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252034496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252193199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.458949480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459063270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459134864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459296351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733493277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733949139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734221216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734462295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:50:33 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.409481815Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.656026383Z" level=info msg="ignoring event" container=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.657306959Z" level=info msg="shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658560636Z" level=warning msg="cleaning up after shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658678934Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.676709403Z" level=info msg="ignoring event" container=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.677164894Z" level=info msg="shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678209575Z" level=warning msg="cleaning up after shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678304373Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.695081165Z" level=info msg="ignoring event" container=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695302161Z" level=info msg="shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695385859Z" level=warning msg="cleaning up after shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695446458Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.701015856Z" level=info msg="ignoring event" container=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.702594427Z" level=info msg="shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704149698Z" level=warning msg="cleaning up after shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704221897Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.728693847Z" level=info msg="shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729328035Z" level=info msg="ignoring event" container=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729433134Z" level=info msg="ignoring event" container=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.731072903Z" level=warning msg="cleaning up after shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.734341743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.734844834Z" level=info msg="ignoring event" container=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.735006831Z" level=info msg="ignoring event" container=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.743975166Z" level=info msg="shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744093164Z" level=warning msg="cleaning up after shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744205762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.730359917Z" level=info msg="shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751792823Z" level=warning msg="cleaning up after shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751834022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759660178Z" level=info msg="ignoring event" container=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759862574Z" level=info msg="ignoring event" container=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760069570Z" level=info msg="ignoring event" container=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760281966Z" level=info msg="ignoring event" container=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760277567Z" level=info msg="shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760380865Z" level=warning msg="cleaning up after shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760394364Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.748823577Z" level=info msg="shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765443172Z" level=warning msg="cleaning up after shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765461071Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769325900Z" level=info msg="shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769546096Z" level=warning msg="cleaning up after shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769827691Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774921997Z" level=info msg="shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774984396Z" level=warning msg="cleaning up after shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774997396Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788278152Z" level=info msg="shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788393450Z" level=warning msg="cleaning up after shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788444649Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.846647379Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1439]: time="2024-07-21T23:50:38.541510181Z" level=info msg="ignoring event" container=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.544122633Z" level=info msg="shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545450508Z" level=warning msg="cleaning up after shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545830901Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.461769452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.504142282Z" level=info msg="ignoring event" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504338210Z" level=info msg="shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504430323Z" level=warning msg="cleaning up after shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504443725Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.578959353Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579851478Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579966294Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.580111114Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:50:44 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Consumed 5.235s CPU time.
	Jul 21 23:50:44 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:50:44 functional-264400 dockerd[4061]: time="2024-07-21T23:50:44.647231378Z" level=info msg="Starting up"
	Jul 21 23:51:44 functional-264400 dockerd[4061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 21 23:51:44 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0721 23:51:44.740966    3296 out.go:239] * 
	W0721 23:51:44.742865    3296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 23:51:44.751682    3296 out.go:177] 
	
	
	==> Docker <==
	Jul 21 23:52:45 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:52:45 functional-264400 dockerd[4553]: time="2024-07-21T23:52:45.074328143Z" level=info msg="Starting up"
	Jul 21 23:53:45 functional-264400 dockerd[4553]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="error getting RW layer size for container ID '74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559'"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="error getting RW layer size for container ID 'fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab'"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="error getting RW layer size for container ID '46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd'"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="error getting RW layer size for container ID 'd3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:53:45 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf'"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="error getting RW layer size for container ID 'c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5'"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="error getting RW layer size for container ID '62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789'"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 21 23:53:45 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="error getting RW layer size for container ID '6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:53:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:53:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084'"
	Jul 21 23:53:45 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 21 23:53:45 functional-264400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jul 21 23:53:45 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:53:45 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-21T23:53:47Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul21 23:48] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.098748] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.521494] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.200309] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.246957] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +2.856365] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.199871] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.217213] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.265319] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[  +7.860794] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[  +0.119892] kauditd_printk_skb: 202 callbacks suppressed
	[  +6.328518] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.744596] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.374282] systemd-fstab-generator[1877]: Ignoring "noauto" option for root device
	[  +0.101703] kauditd_printk_skb: 48 callbacks suppressed
	[  +8.037046] systemd-fstab-generator[2275]: Ignoring "noauto" option for root device
	[  +0.135108] kauditd_printk_skb: 62 callbacks suppressed
	[Jul21 23:49] systemd-fstab-generator[2503]: Ignoring "noauto" option for root device
	[  +0.181805] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.421058] kauditd_printk_skb: 71 callbacks suppressed
	[Jul21 23:50] systemd-fstab-generator[3581]: Ignoring "noauto" option for root device
	[  +0.640674] systemd-fstab-generator[3616]: Ignoring "noauto" option for root device
	[  +0.278009] systemd-fstab-generator[3628]: Ignoring "noauto" option for root device
	[  +0.318270] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[  +5.355152] kauditd_printk_skb: 91 callbacks suppressed
	
	
	==> kernel <==
	 23:54:45 up 8 min,  0 users,  load average: 0.01, 0.16, 0.10
	Linux functional-264400 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 21 23:54:36 functional-264400 kubelet[2282]: E0721 23:54:36.272383    2282 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 21 23:54:37 functional-264400 kubelet[2282]: E0721 23:54:37.599700    2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused" interval="7s"
	Jul 21 23:54:38 functional-264400 kubelet[2282]: E0721 23:54:38.193950    2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-264400.17e45f62791f0602\": dial tcp 172.28.193.97:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-264400.17e45f62791f0602  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-264400,UID:d4a646c87acc77b79c334272b81f6958,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.28.193.97:8441/readyz\": dial tcp 172.28.193.97:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-264400,},FirstTimestamp:2024-07-21 23:50:34.105882114 +0000 UTC m=+105.975509982,LastTimestamp:2024-07-21 23:50:35.105962231 +0000 UTC m=+106.975590099,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-264400,}"
	Jul 21 23:54:38 functional-264400 kubelet[2282]: I0721 23:54:38.334093    2282 status_manager.go:853] "Failed to get status for pod" podUID="d4a646c87acc77b79c334272b81f6958" pod="kube-system/kube-apiserver-functional-264400" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-264400\": dial tcp 172.28.193.97:8441: connect: connection refused"
	Jul 21 23:54:41 functional-264400 kubelet[2282]: E0721 23:54:41.006889    2282 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m8.33516923s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 21 23:54:44 functional-264400 kubelet[2282]: E0721 23:54:44.601842    2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused" interval="7s"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.334865    2282 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.335042    2282 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.335491    2282 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.336168    2282 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.336518    2282 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: I0721 23:54:45.336655    2282 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.336695    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.336776    2282 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.336869    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.337091    2282 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.336536    2282 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.337644    2282 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: I0721 23:54:45.338045    2282 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.340310    2282 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.340449    2282 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.340470    2282 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.340562    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.340639    2282 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 21 23:54:45 functional-264400 kubelet[2282]: E0721 23:54:45.341261    2282 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:51:57.593561    7244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0721 23:52:44.900114    7244 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:52:44.936055    7244 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:52:44.967265    7244 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:52:44.997466    7244 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:53:45.094242    7244 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:53:45.128706    7244 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:53:45.161211    7244 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:53:45.200172    7244 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
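
Every logs.go:273 failure in the stderr block above has the same root cause: minikube shells out to `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` once per control-plane component, and with dockerd down each invocation exits 1 with the identical "Cannot connect to the Docker daemon" message. A minimal reproduction sketch, assuming a shell inside the VM; the component names are taken verbatim from the filters above:

	# Each listing minikube attempts; with dockerd stopped, every one fails identically.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  sudo docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
	done
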
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-264400 -n functional-264400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-264400 -n functional-264400: exit status 2 (12.1776484s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:54:46.149948    5348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-264400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (345.13s)
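
The stdout above points at dockerd itself rather than kubelet or the apiserver: after systemd restarts docker.service at 23:50:44, the daemon waits 60s to dial /run/containerd/containerd.sock, gives up with "context deadline exceeded", and systemd keeps cycling the unit (restart counter at 3 by 23:53:45). A hedged diagnostic sketch, assuming the VM is still reachable over SSH; `minikube ssh`, systemctl, and journalctl are standard tools, and the profile name comes from this report:

	# Check whether containerd ever came back, and whether its socket exists on the VM.
	minikube ssh -p functional-264400 "sudo systemctl status containerd docker"
	minikube ssh -p functional-264400 "sudo journalctl -u containerd -n 50 --no-pager"
	minikube ssh -p functional-264400 "ls -l /run/containerd/containerd.sock"
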

                                                
                                    
TestFunctional/serial/KubectlGetPods (180.56s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-264400 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-264400 get po -A: exit status 1 (10.348325s)

                                                
                                                
** stderr ** 
	E0721 23:55:00.482603   12848 memcache.go:265] couldn't get current server API group list: Get "https://172.28.193.97:8441/api?timeout=32s": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.
	E0721 23:55:02.586535   12848 memcache.go:265] couldn't get current server API group list: Get "https://172.28.193.97:8441/api?timeout=32s": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.
	E0721 23:55:04.615202   12848 memcache.go:265] couldn't get current server API group list: Get "https://172.28.193.97:8441/api?timeout=32s": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.
	E0721 23:55:06.659532   12848 memcache.go:265] couldn't get current server API group list: Get "https://172.28.193.97:8441/api?timeout=32s": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.
	E0721 23:55:08.691033   12848 memcache.go:265] couldn't get current server API group list: Get "https://172.28.193.97:8441/api?timeout=32s": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-264400 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"E0721 23:55:00.482603   12848 memcache.go:265] couldn't get current server API group list: Get \"https://172.28.193.97:8441/api?timeout=32s\": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.\nE0721 23:55:02.586535   12848 memcache.go:265] couldn't get current server API group list: Get \"https://172.28.193.97:8441/api?timeout=32s\": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.\nE0721 23:55:04.615202   12848 memcache.go:265] couldn't get current server API group list: Get \"https://172.28.193.97:8441/api?timeout=32s\": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.\nE0721 23:55:06.659532   12848 memcache.go:265] couldn't get current server API group list: Get \"https://172.28.193.97:8441/api?timeout=32s\": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.\nE0721 23:55:08.691033   12848 memcache.go:265] couldn't get current server API group list: Get \"https://172.28.193.97:8441/api?timeout=32s\": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.\nUnable to connect to the server: dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.\n"*: args "kubectl --context functional-264400 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-264400 get po -A"
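
These kubectl failures are downstream of the dead container runtime: with dockerd unable to start, the kube-apiserver container cannot run, so 172.28.193.97:8441 actively refuses connections. A quick hedged sanity check that separates "wrong endpoint in kubeconfig" from "apiserver down" (standard kubectl invocations; the context name comes from this report):

	# Confirm which server the context points at, then probe the apiserver health endpoint directly.
	kubectl config view --minify --context functional-264400 -o jsonpath='{.clusters[0].cluster.server}'
	kubectl --context functional-264400 get --raw /readyz --request-timeout=5s
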
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-264400 -n functional-264400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-264400 -n functional-264400: exit status 2 (12.0275349s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:55:08.798039    6708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 logs -n 25: (2m25.5742339s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| addons  | addons-979300 addons disable                                          | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:36 UTC | 21 Jul 24 23:36 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-979300 addons                                                  | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:36 UTC | 21 Jul 24 23:36 UTC |
	|         | disable volumesnapshots                                               |                   |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                |                   |                   |         |                     |                     |
	| addons  | addons-979300 addons disable                                          | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:36 UTC | 21 Jul 24 23:37 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| addons  | addons-979300 addons disable                                          | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:37 UTC | 21 Jul 24 23:37 UTC |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| stop    | -p addons-979300                                                      | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:37 UTC | 21 Jul 24 23:38 UTC |
	| addons  | enable dashboard -p                                                   | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:38 UTC | 21 Jul 24 23:38 UTC |
	|         | addons-979300                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:38 UTC | 21 Jul 24 23:38 UTC |
	|         | addons-979300                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:38 UTC | 21 Jul 24 23:38 UTC |
	|         | addons-979300                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-979300                                                      | addons-979300     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:38 UTC | 21 Jul 24 23:39 UTC |
	| start   | -p nospam-420400 -n=1 --memory=2250 --wait=false                      | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:39 UTC | 21 Jul 24 23:42 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:42 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:42 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:42 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:43 UTC | 21 Jul 24 23:43 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:43 UTC | 21 Jul 24 23:43 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:43 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                               | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-420400                                                      | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	| start   | -p functional-264400                                                  | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:49 UTC |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-264400                                                  | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:49 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:49:13
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:49:13.243734    3296 out.go:291] Setting OutFile to fd 632 ...
	I0721 23:49:13.245089    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:49:13.245089    3296 out.go:304] Setting ErrFile to fd 612...
	I0721 23:49:13.245225    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:49:13.271799    3296 out.go:298] Setting JSON to false
	I0721 23:49:13.274576    3296 start.go:129] hostinfo: {"hostname":"minikube6","uptime":120960,"bootTime":1721484792,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0721 23:49:13.275656    3296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 23:49:13.279846    3296 out.go:177] * [functional-264400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0721 23:49:13.284743    3296 notify.go:220] Checking for updates...
	I0721 23:49:13.286577    3296 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0721 23:49:13.288761    3296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:49:13.292203    3296 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0721 23:49:13.295335    3296 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:49:13.299523    3296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:49:13.304221    3296 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:49:13.304533    3296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:49:18.732833    3296 out.go:177] * Using the hyperv driver based on existing profile
	I0721 23:49:18.737459    3296 start.go:297] selected driver: hyperv
	I0721 23:49:18.737459    3296 start.go:901] validating driver "hyperv" against &{Name:functional-264400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-264400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.97 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:49:18.737459    3296 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:49:18.788011    3296 cni.go:84] Creating CNI manager for ""
	I0721 23:49:18.788078    3296 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 23:49:18.788279    3296 start.go:340] cluster config:
	{Name:functional-264400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-264400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.97 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:49:18.788712    3296 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:49:18.793652    3296 out.go:177] * Starting "functional-264400" primary control-plane node in "functional-264400" cluster
	I0721 23:49:18.796373    3296 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 23:49:18.796558    3296 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 23:49:18.796558    3296 cache.go:56] Caching tarball of preloaded images
	I0721 23:49:18.796558    3296 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 23:49:18.796558    3296 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 23:49:18.797352    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\config.json ...
	I0721 23:49:18.798980    3296 start.go:360] acquireMachinesLock for functional-264400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:49:18.798980    3296 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-264400"
	I0721 23:49:18.798980    3296 start.go:96] Skipping create...Using existing machine configuration
	I0721 23:49:18.799979    3296 fix.go:54] fixHost starting: 
	I0721 23:49:18.799979    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:21.623488    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:21.623488    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:21.623488    3296 fix.go:112] recreateIfNeeded on functional-264400: state=Running err=<nil>
	W0721 23:49:21.623488    3296 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 23:49:21.629305    3296 out.go:177] * Updating the running hyperv "functional-264400" VM ...
	I0721 23:49:21.631533    3296 machine.go:94] provisionDockerMachine start ...
	I0721 23:49:21.631533    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:23.852288    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:23.852288    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:23.852522    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:26.470028    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:26.470028    3296 main.go:141] libmachine: [stderr =====>] : 
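
Every provisioning step below repeats this exchange: shell out to powershell.exe to read the VM's state, then the first NIC's first IP address. A minimal Go sketch of that pattern, using the same PowerShell expressions the log shows (error handling trimmed; a sketch, not libmachine's implementation):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// ps runs one PowerShell expression the way the log shows and returns its stdout.
	func ps(expr string) (string, error) {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", expr).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		state, err := ps(`( Hyper-V\Get-VM functional-264400 ).state`)
		if err != nil {
			log.Fatal(err)
		}
		ip, err := ps(`(( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]`)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("state=%s ip=%s\n", state, ip)
	}
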
	I0721 23:49:26.478442    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:26.479167    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:26.479167    3296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0721 23:49:26.622339    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-264400
	
	I0721 23:49:26.622466    3296 buildroot.go:166] provisioning hostname "functional-264400"
	I0721 23:49:26.622607    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:28.824177    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:28.824557    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:28.824557    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:31.414467    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:31.414467    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:31.420319    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:31.421099    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:31.421099    3296 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-264400 && echo "functional-264400" | sudo tee /etc/hostname
	I0721 23:49:31.588774    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-264400
	
	I0721 23:49:31.588774    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:33.790066    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:33.790474    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:33.790474    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:36.394275    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:36.394275    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:36.399837    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:36.400299    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:36.400299    3296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-264400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-264400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-264400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:49:36.533255    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
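
The shell fragment above is idempotent: if no /etc/hosts line already ends in the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. The same rule as a self-contained Go sketch operating on the file contents (a sketch, not minikube code):

	package main

	import (
		"fmt"
		"log"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostname applies the /etc/hosts rule from the shell above: leave the
	// file untouched if the hostname is mapped, otherwise rewrite an existing
	// 127.0.1.1 line or append a new one.
	func ensureHostname(hosts, name string) string {
		mapped := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
		if mapped.MatchString(hosts) {
			return hosts
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		b, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(ensureHostname(string(b), "functional-264400"))
	}
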
	I0721 23:49:36.533255    3296 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0721 23:49:36.533255    3296 buildroot.go:174] setting up certificates
	I0721 23:49:36.533255    3296 provision.go:84] configureAuth start
	I0721 23:49:36.533977    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:38.735744    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:38.735744    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:38.736834    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:41.319431    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:41.319431    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:41.319569    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:46.052701    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:46.052760    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:46.052760    3296 provision.go:143] copyHostCerts
	I0721 23:49:46.052760    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0721 23:49:46.053530    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0721 23:49:46.053530    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0721 23:49:46.054196    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0721 23:49:46.055555    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0721 23:49:46.055555    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0721 23:49:46.055555    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0721 23:49:46.056169    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0721 23:49:46.056925    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0721 23:49:46.057723    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0721 23:49:46.057723    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0721 23:49:46.057723    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0721 23:49:46.059166    3296 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-264400 san=[127.0.0.1 172.28.193.97 functional-264400 localhost minikube]
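
configureAuth mints a server certificate signed by the minikube CA, carrying the SANs listed on the line above (127.0.0.1, the VM IP, the hostname, localhost, minikube). A compact crypto/x509 sketch of that kind of issuance, using a throwaway CA in place of the ca.pem/ca-key.pem pair from .minikube\certs (a sketch, not minikube's implementation):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// throwaway CA; minikube reuses ca.pem / ca-key.pem from .minikube\certs
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// server certificate with the SANs from the provision.go line above
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-264400"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"functional-264400", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.193.97")},
		}
		der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
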
	I0721 23:49:46.255062    3296 provision.go:177] copyRemoteCerts
	I0721 23:49:46.265961    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:49:46.265961    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:48.459327    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:48.459802    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:48.459881    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:51.076501    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:51.076501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:51.078136    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:49:51.186062    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.919744s)
	I0721 23:49:51.186137    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0721 23:49:51.186285    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0721 23:49:51.234406    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0721 23:49:51.234628    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0721 23:49:51.286824    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0721 23:49:51.286998    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0721 23:49:51.338595    3296 provision.go:87] duration metric: took 14.8051233s to configureAuth
	I0721 23:49:51.338595    3296 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:49:51.339337    3296 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:49:51.339480    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:53.533831    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:53.534329    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:53.534329    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:56.137211    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:56.137352    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:56.143046    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:56.143046    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:56.143046    3296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0721 23:49:56.285359    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0721 23:49:56.285421    3296 buildroot.go:70] root file system type: tmpfs
	I0721 23:49:56.285723    3296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0721 23:49:56.285723    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:58.466808    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:58.466808    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:58.467788    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:01.029845    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:01.030501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:01.036243    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:01.036485    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:01.036485    3296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0721 23:50:01.200267    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0721 23:50:01.200414    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:06.025859    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:06.025859    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:06.031850    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:06.032255    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:06.032255    3296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0721 23:50:06.194379    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
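
The one-liner above is a compare-then-swap: `diff -u` exits non-zero only when the staged docker.service.new differs from the installed unit, and only then is the new file moved into place and docker daemon-reloaded, re-enabled, and restarted. The same pattern as a local Go sketch (paths from the log; not minikube code):

	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		cur, _ := os.ReadFile("/lib/systemd/system/docker.service")
		next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
		if err != nil || bytes.Equal(cur, next) {
			return // nothing staged, or no change: leave the running service alone
		}
		if err := os.Rename("/lib/systemd/system/docker.service.new",
			"/lib/systemd/system/docker.service"); err != nil {
			log.Fatal(err)
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
		} {
			if err := exec.Command("systemctl", args...).Run(); err != nil {
				log.Fatal(err)
			}
		}
	}
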
	I0721 23:50:06.194379    3296 machine.go:97] duration metric: took 44.5622996s to provisionDockerMachine
	I0721 23:50:06.194379    3296 start.go:293] postStartSetup for "functional-264400" (driver="hyperv")
	I0721 23:50:06.194379    3296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:50:06.209650    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:50:06.209650    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:08.393053    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:08.393053    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:08.393698    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:10.989526    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:10.989526    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:10.989613    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:11.100095    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8903841s)
	I0721 23:50:11.113697    3296 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:50:11.120917    3296 command_runner.go:130] > NAME=Buildroot
	I0721 23:50:11.120917    3296 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0721 23:50:11.120917    3296 command_runner.go:130] > ID=buildroot
	I0721 23:50:11.120917    3296 command_runner.go:130] > VERSION_ID=2023.02.9
	I0721 23:50:11.120999    3296 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0721 23:50:11.121050    3296 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:50:11.121050    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0721 23:50:11.121510    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0721 23:50:11.122518    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0721 23:50:11.122575    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0721 23:50:11.123543    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts -> hosts in /etc/test/nested/copy/5100
	I0721 23:50:11.123618    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts -> /etc/test/nested/copy/5100/hosts
	I0721 23:50:11.133586    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/5100
	I0721 23:50:11.152687    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0721 23:50:11.202971    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts --> /etc/test/nested/copy/5100/hosts (40 bytes)
	I0721 23:50:11.255289    3296 start.go:296] duration metric: took 5.0608472s for postStartSetup
	I0721 23:50:11.255289    3296 fix.go:56] duration metric: took 52.4546661s for fixHost
	I0721 23:50:11.255289    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:13.434592    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:13.434592    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:13.435305    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:16.055310    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:16.055310    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:16.061461    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:16.061461    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:16.061461    3296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0721 23:50:16.203294    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605816.220500350
	
	I0721 23:50:16.203389    3296 fix.go:216] guest clock: 1721605816.220500350
	I0721 23:50:16.203389    3296 fix.go:229] Guest: 2024-07-21 23:50:16.22050035 +0000 UTC Remote: 2024-07-21 23:50:11.2552893 +0000 UTC m=+58.166615301 (delta=4.96521105s)
	I0721 23:50:16.203490    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:18.378670    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:18.378670    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:18.378758    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:21.010405    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:21.010405    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:21.016091    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:21.016289    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:21.016289    3296 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721605816
	I0721 23:50:21.170845    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Jul 21 23:50:16 UTC 2024
	
	I0721 23:50:21.171182    3296 fix.go:236] clock set: Sun Jul 21 23:50:16 UTC 2024
	 (err=<nil>)
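
fix.go reads the guest clock with `date +%s.%N`, compares it against the host-side timestamp captured when the step began, and on drift pushes the host epoch into the guest via `sudo date -s @<epoch>`. Reproducing the logged arithmetic in Go (values copied from the log above; the 2-second threshold is illustrative, not minikube's actual cutoff):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1721605816, 220500350)                         // 1721605816.220500350 from the VM
		remote := time.Date(2024, 7, 21, 23, 50, 11, 255289300, time.UTC) // host-side timestamp from the log
		delta := guest.Sub(remote)
		fmt.Printf("delta=%v\n", delta) // prints ~4.96521105s, matching the logged value
		if delta > 2*time.Second || delta < -2*time.Second { // illustrative threshold
			fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
		}
	}
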
	I0721 23:50:21.171182    3296 start.go:83] releasing machines lock for "functional-264400", held for 1m2.3714351s
	I0721 23:50:21.171265    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:23.395806    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:23.395850    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:23.395850    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:26.024178    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:26.024178    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:26.028577    3296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0721 23:50:26.028739    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:26.043523    3296 ssh_runner.go:195] Run: cat /version.json
	I0721 23:50:26.043523    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:28.404030    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:28.404030    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:31.161323    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:31.161323    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:31.162685    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:31.218457    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:31.219166    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:31.219224    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:31.264926    3296 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0721 23:50:31.265653    3296 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.236833s)
	W0721 23:50:31.265743    3296 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0721 23:50:31.313118    3296 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0721 23:50:31.313718    3296 ssh_runner.go:235] Completed: cat /version.json: (5.2701285s)
	I0721 23:50:31.326430    3296 ssh_runner.go:195] Run: systemctl --version
	I0721 23:50:31.335559    3296 command_runner.go:130] > systemd 252 (252)
	I0721 23:50:31.335630    3296 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0721 23:50:31.347271    3296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0721 23:50:31.356110    3296 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0721 23:50:31.356110    3296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:50:31.367122    3296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0721 23:50:31.377018    3296 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0721 23:50:31.377190    3296 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0721 23:50:31.390830    3296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0721 23:50:31.390919    3296 start.go:495] detecting cgroup driver to use...
	I0721 23:50:31.391177    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:50:31.430605    3296 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0721 23:50:31.443163    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0721 23:50:31.473200    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0721 23:50:31.495064    3296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0721 23:50:31.505345    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0721 23:50:31.537330    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:50:31.570237    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0721 23:50:31.603641    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:50:31.634289    3296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:50:31.667749    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0721 23:50:31.699347    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0721 23:50:31.728970    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0721 23:50:31.758020    3296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:50:31.777872    3296 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0721 23:50:31.788667    3296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 23:50:31.817394    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:50:32.095962    3296 ssh_runner.go:195] Run: sudo systemctl restart containerd
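
The run of `sed` edits above rewrites /etc/containerd/config.toml for the cgroupfs driver (SystemdCgroup = false), the io.containerd.runc.v2 shim, and /etc/cni/net.d as the CNI conf dir before restarting containerd. The key substitution as a Go sketch over the file contents (mirrors the SystemdCgroup sed line; helper name is hypothetical):

	package main

	import (
		"fmt"
		"regexp"
	)

	// forceCgroupfs mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	func forceCgroupfs(toml string) string {
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		return re.ReplaceAllString(toml, "${1}SystemdCgroup = false")
	}

	func main() {
		in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
			"    SystemdCgroup = true\n"
		fmt.Print(forceCgroupfs(in)) // indentation preserved, value flipped to false
	}
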
	I0721 23:50:32.129994    3296 start.go:495] detecting cgroup driver to use...
	I0721 23:50:32.144084    3296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0721 23:50:32.171240    3296 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0721 23:50:32.171513    3296 command_runner.go:130] > [Unit]
	I0721 23:50:32.171513    3296 command_runner.go:130] > Description=Docker Application Container Engine
	I0721 23:50:32.171513    3296 command_runner.go:130] > Documentation=https://docs.docker.com
	I0721 23:50:32.171513    3296 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0721 23:50:32.171513    3296 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0721 23:50:32.171626    3296 command_runner.go:130] > StartLimitBurst=3
	I0721 23:50:32.171626    3296 command_runner.go:130] > StartLimitIntervalSec=60
	I0721 23:50:32.171626    3296 command_runner.go:130] > [Service]
	I0721 23:50:32.171626    3296 command_runner.go:130] > Type=notify
	I0721 23:50:32.171626    3296 command_runner.go:130] > Restart=on-failure
	I0721 23:50:32.171626    3296 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0721 23:50:32.171697    3296 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0721 23:50:32.171697    3296 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0721 23:50:32.171697    3296 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0721 23:50:32.171697    3296 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0721 23:50:32.171697    3296 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0721 23:50:32.171763    3296 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0721 23:50:32.171763    3296 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0721 23:50:32.171763    3296 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0721 23:50:32.171763    3296 command_runner.go:130] > ExecStart=
	I0721 23:50:32.171845    3296 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0721 23:50:32.171845    3296 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0721 23:50:32.171845    3296 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0721 23:50:32.171910    3296 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0721 23:50:32.171935    3296 command_runner.go:130] > LimitNOFILE=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > LimitNPROC=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > LimitCORE=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0721 23:50:32.171968    3296 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0721 23:50:32.171968    3296 command_runner.go:130] > TasksMax=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > TimeoutStartSec=0
	I0721 23:50:32.171968    3296 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0721 23:50:32.171968    3296 command_runner.go:130] > Delegate=yes
	I0721 23:50:32.171968    3296 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0721 23:50:32.171968    3296 command_runner.go:130] > KillMode=process
	I0721 23:50:32.171968    3296 command_runner.go:130] > [Install]
	I0721 23:50:32.171968    3296 command_runner.go:130] > WantedBy=multi-user.target
	I0721 23:50:32.185323    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:50:32.222308    3296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:50:32.269127    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:50:32.308679    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0721 23:50:32.337720    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:50:32.373754    3296 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0721 23:50:32.387559    3296 ssh_runner.go:195] Run: which cri-dockerd
	I0721 23:50:32.393567    3296 command_runner.go:130] > /usr/bin/cri-dockerd
	I0721 23:50:32.407091    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0721 23:50:32.429103    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0721 23:50:32.473119    3296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0721 23:50:32.747171    3296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0721 23:50:32.998956    3296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0721 23:50:32.999296    3296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
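
docker.go stages a 130-byte daemon.json to keep dockerd on the cgroupfs driver as well; the log shows only the size, not the payload, so the JSON emitted by the sketch below is a plausible, hypothetical reconstruction only:

	package main

	import (
		"encoding/json"
		"os"
	)

	func main() {
		// hypothetical contents: the log records only "(130 bytes)", not the payload
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		enc := json.NewEncoder(os.Stdout) // minikube scp's the real file to /etc/docker/daemon.json
		enc.SetIndent("", "  ")
		enc.Encode(cfg)
	}
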
	I0721 23:50:33.051719    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:50:33.356719    3296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 23:51:44.652209    3296 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0721 23:51:44.652633    3296 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0721 23:51:44.654236    3296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.2965818s)
	I0721 23:51:44.666167    3296 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.168118118Z" level=info msg="Starting up"
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.169181481Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.170711772Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.204506281Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239101537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239202743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239269947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239286548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239363452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239504161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239689572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239796878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239818179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239829580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240023691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240532022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243523700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243618405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244010128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244130936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244288745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244514558Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274608247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274731654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274757156Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274774157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274806859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.698341    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275350391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.698341    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275567104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275667010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275688011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275707112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275721313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275742514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275764116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275780417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275794017Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275807418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698670    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275819619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275876422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275890923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698817    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275939726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698817    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275958027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275970928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275983929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275997230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276018931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276036232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276049233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276066634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276084135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276105336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699059    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276119437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699059    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276132038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276357651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276454457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276513660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699204    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276580764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.699204    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276655869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699260    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276712372Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.699260    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276762075Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277188900Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277433015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277589224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.278054352Z" level=info msg="containerd successfully booted in 0.074903s"
	I0721 23:51:44.699388    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.247751721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.277834397Z" level=info msg="Loading containers: start."
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.441509517Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.655815314Z" level=info msg="Loading containers: done."
	I0721 23:51:44.699472    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676595884Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.699498    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676745891Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.699498    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788327964Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788443669Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.978875672Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980386251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980770345Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980878444Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980936643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.044964117Z" level=info msg="Starting up"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.046051302Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.047547081Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1095
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.077138071Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103738503Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103854902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103894101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103909101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103931301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103942600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104085398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104215897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104236396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104246796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104289796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104467393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108266041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108366439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108599936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108922331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109041730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109088329Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109284326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109335126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109351726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109365825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109378125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109446524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110271513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110431611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110840005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110866105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110891004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110910804Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110947503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700871    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110983003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700871    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111002703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111019702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111038702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111054502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111096201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111137101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111175900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111189300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111205600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111236299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111251899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111274399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111294599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111330498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111345998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111376797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111394397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111421297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111457096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111535995Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111594594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111638794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111653394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111706593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111722293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111736992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111747992Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112862377Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112947276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113020375Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113041274Z" level=info msg="containerd successfully booted in 0.036788s"
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.102172085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.122803299Z" level=info msg="Loading containers: start."
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.249728942Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.363421569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.454819504Z" level=info msg="Loading containers: done."
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478314979Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478440677Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523349955Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.701834    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523496853Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.403414153Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.404940232Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405762121Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405911219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405963218Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.702027    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 dockerd[1439]: time="2024-07-21T23:48:27.488211040Z" level=info msg="Starting up"
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.283164837Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.284334421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1445
	I0721 23:51:44.702206    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.322546392Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.353969657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354127155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354245353Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.702289    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354279453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354386052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354424751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354988043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355091642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355116141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355128941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355204740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355558335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359334983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359441882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359612579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359749577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359878975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359993174Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360138772Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360266770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.702688    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360289170Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360306770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360434168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360490167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360944161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361072859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361207757Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361229957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361245657Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361275356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361389255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361429254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361568652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361594052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361609452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361622451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361656951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361680651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361901447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361999446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362019946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362046645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362061445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362075845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362092245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362111045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362124244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362136944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362154644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362178044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362192043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362211643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362342741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362390341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362406041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362418640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362429040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362444140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.703684    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362455640Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.703715    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362742536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.703715    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362893434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362971133Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362995232Z" level=info msg="containerd successfully booted in 0.041146s"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:29 functional-264400 dockerd[1439]: time="2024-07-21T23:48:29.329544955Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.660319456Z" level=info msg="Loading containers: start."
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.796232675Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.907798631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.001144539Z" level=info msg="Loading containers: done."
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022589743Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022719941Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067087927Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067159926Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203705562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203993309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204174339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204501992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275055860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275220587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275259793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275372211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333574371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333646683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333744099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333850816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416645674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416770094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416839505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.417133553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625603538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625875582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704442    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625899586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704622    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.626009704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704779    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776176512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776370643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776546172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.835904420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836147160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836225472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836649541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887079538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887333179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887543914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887899671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.134772975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141087657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141198860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141750876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576099088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576165990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576179490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576332795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.700943823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701110428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701133028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705465    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701305233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705465    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.251787691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705516    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252007895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705516    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252034496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252193199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.458949480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459063270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459134864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459296351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733493277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733949139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734221216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734462295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.409481815Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.656026383Z" level=info msg="ignoring event" container=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.657306959Z" level=info msg="shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658560636Z" level=warning msg="cleaning up after shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658678934Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.676709403Z" level=info msg="ignoring event" container=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.677164894Z" level=info msg="shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678209575Z" level=warning msg="cleaning up after shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678304373Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.695081165Z" level=info msg="ignoring event" container=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695302161Z" level=info msg="shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695385859Z" level=warning msg="cleaning up after shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695446458Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.701015856Z" level=info msg="ignoring event" container=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.702594427Z" level=info msg="shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704149698Z" level=warning msg="cleaning up after shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	I0721 23:51:44.706258    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704221897Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.728693847Z" level=info msg="shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729328035Z" level=info msg="ignoring event" container=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729433134Z" level=info msg="ignoring event" container=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.731072903Z" level=warning msg="cleaning up after shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.734341743Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.734844834Z" level=info msg="ignoring event" container=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.735006831Z" level=info msg="ignoring event" container=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.743975166Z" level=info msg="shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744093164Z" level=warning msg="cleaning up after shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744205762Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.730359917Z" level=info msg="shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751792823Z" level=warning msg="cleaning up after shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751834022Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759660178Z" level=info msg="ignoring event" container=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759862574Z" level=info msg="ignoring event" container=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760069570Z" level=info msg="ignoring event" container=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760281966Z" level=info msg="ignoring event" container=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760277567Z" level=info msg="shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760380865Z" level=warning msg="cleaning up after shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760394364Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.748823577Z" level=info msg="shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765443172Z" level=warning msg="cleaning up after shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765461071Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769325900Z" level=info msg="shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769546096Z" level=warning msg="cleaning up after shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769827691Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706951    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774921997Z" level=info msg="shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	I0721 23:51:44.706951    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774984396Z" level=warning msg="cleaning up after shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	I0721 23:51:44.706995    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774997396Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788278152Z" level=info msg="shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788393450Z" level=warning msg="cleaning up after shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788444649Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.846647379Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1439]: time="2024-07-21T23:50:38.541510181Z" level=info msg="ignoring event" container=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.544122633Z" level=info msg="shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545450508Z" level=warning msg="cleaning up after shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545830901Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.461769452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.504142282Z" level=info msg="ignoring event" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504338210Z" level=info msg="shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504430323Z" level=warning msg="cleaning up after shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504443725Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.578959353Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579851478Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579966294Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.580111114Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Consumed 5.235s CPU time.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 dockerd[4061]: time="2024-07-21T23:50:44.647231378Z" level=info msg="Starting up"
	I0721 23:51:44.707551    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 dockerd[4061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0721 23:51:44.707551    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0721 23:51:44.707593    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0721 23:51:44.707593    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
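	[Editorial note] The dial timeout above is the proximate failure. Reading the captured journal: the earlier dockerd starts (pids 672, 1088, and 1439) each log "containerd not running, starting managed containerd" and come up against docker's own managed socket under /var/run/docker/containerd/, while the final restart (pid 4061, "Starting up" at 23:50:44) instead dials the system socket /run/containerd/containerd.sock and gives up after 60 seconds with "context deadline exceeded", which suggests the last restart was reconfigured to use an external containerd that never became ready. systemd therefore marks docker.service failed, and minikube surfaces this as the RUNTIME_ENABLE error below before re-dumping the same journal via "sudo journalctl --no-pager -u docker". A minimal triage sketch, assuming the functional-264400 VM is still reachable; these commands are illustrative and were not run by the test harness:
	
	# Hypothetical follow-up diagnosis; not part of this report's captured output.
	# Check whether the system containerd the last dockerd tried to dial is up:
	minikube ssh -p functional-264400 "sudo systemctl status containerd --no-pager"
	minikube ssh -p functional-264400 "sudo journalctl -u containerd --no-pager -n 50"
	# Confirm the socket exists and answers:
	minikube ssh -p functional-264400 "ls -l /run/containerd/containerd.sock"
	minikube ssh -p functional-264400 "sudo ctr --address /run/containerd/containerd.sock version"
	# If containerd looks healthy, retry the exact step minikube's RUNTIME_ENABLE check ran:
	minikube ssh -p functional-264400 "sudo systemctl restart docker"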
	I0721 23:51:44.736086    3296 out.go:177] 
	W0721 23:51:44.740389    3296 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 21 23:47:42 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.168118118Z" level=info msg="Starting up"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.169181481Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.170711772Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.204506281Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239101537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239202743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239269947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239286548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239363452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239504161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239689572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239796878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239818179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239829580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240023691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240532022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243523700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243618405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244010128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244130936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244288745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244514558Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274608247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274731654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274757156Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274774157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274806859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275350391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275567104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275667010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275688011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275707112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275721313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275742514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275764116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275780417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275794017Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275807418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275819619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275876422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275890923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275939726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275958027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275970928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275983929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275997230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276018931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276036232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276049233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276066634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276084135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276105336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276119437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276132038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276357651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276454457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276513660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276580764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276655869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276712372Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276762075Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277188900Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277433015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277589224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.278054352Z" level=info msg="containerd successfully booted in 0.074903s"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.247751721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.277834397Z" level=info msg="Loading containers: start."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.441509517Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.655815314Z" level=info msg="Loading containers: done."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676595884Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676745891Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788327964Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788443669Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:47:43 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.978875672Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:14 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980386251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980770345Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980878444Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980936643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:15 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:15 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:15 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.044964117Z" level=info msg="Starting up"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.046051302Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.047547081Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1095
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.077138071Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103738503Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103854902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103894101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103909101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103931301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103942600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104085398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104215897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104236396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104246796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104289796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104467393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108266041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108366439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108599936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108922331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109041730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109088329Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109284326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109335126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109351726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109365825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109378125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109446524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110271513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110431611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110840005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110866105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110891004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110910804Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110947503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110983003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111002703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111019702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111038702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111054502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111096201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111137101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111175900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111189300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111205600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111236299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111251899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111274399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111294599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111330498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111345998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111376797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111394397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111421297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111457096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111535995Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111594594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111638794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111653394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111706593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111722293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111736992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111747992Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112862377Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112947276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113020375Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113041274Z" level=info msg="containerd successfully booted in 0.036788s"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.102172085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.122803299Z" level=info msg="Loading containers: start."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.249728942Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.363421569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.454819504Z" level=info msg="Loading containers: done."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478314979Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478440677Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523349955Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:17 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523496853Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:26 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.403414153Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.404940232Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405762121Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405911219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405963218Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:27 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:27 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:27 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:27 functional-264400 dockerd[1439]: time="2024-07-21T23:48:27.488211040Z" level=info msg="Starting up"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.283164837Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.284334421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1445
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.322546392Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.353969657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354127155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354245353Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354279453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354386052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354424751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354988043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355091642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355116141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355128941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355204740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355558335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359334983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359441882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359612579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359749577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359878975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359993174Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360138772Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360266770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360289170Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360306770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360434168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360490167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360944161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361072859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361207757Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361229957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361245657Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361275356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361389255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361429254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361568652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361594052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361609452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361622451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361656951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361680651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361901447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361999446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362019946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362046645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362061445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362075845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362092245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362111045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362124244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362136944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362154644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362178044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362192043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362211643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362342741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362390341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362406041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362418640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362429040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362444140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362455640Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362742536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362893434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362971133Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362995232Z" level=info msg="containerd successfully booted in 0.041146s"
	Jul 21 23:48:29 functional-264400 dockerd[1439]: time="2024-07-21T23:48:29.329544955Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.660319456Z" level=info msg="Loading containers: start."
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.796232675Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.907798631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.001144539Z" level=info msg="Loading containers: done."
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022589743Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022719941Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067087927Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067159926Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:33 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203705562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203993309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204174339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204501992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275055860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275220587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275259793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275372211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333574371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333646683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333744099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333850816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416645674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416770094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416839505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.417133553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625603538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625875582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625899586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.626009704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776176512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776370643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776546172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.835904420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836147160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836225472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836649541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887079538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887333179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887543914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887899671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.134772975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141087657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141198860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141750876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576099088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576165990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576179490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576332795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.700943823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701110428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701133028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701305233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.251787691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252007895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252034496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252193199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.458949480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459063270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459134864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459296351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733493277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733949139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734221216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734462295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:50:33 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.409481815Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.656026383Z" level=info msg="ignoring event" container=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.657306959Z" level=info msg="shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658560636Z" level=warning msg="cleaning up after shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658678934Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.676709403Z" level=info msg="ignoring event" container=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.677164894Z" level=info msg="shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678209575Z" level=warning msg="cleaning up after shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678304373Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.695081165Z" level=info msg="ignoring event" container=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695302161Z" level=info msg="shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695385859Z" level=warning msg="cleaning up after shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695446458Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.701015856Z" level=info msg="ignoring event" container=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.702594427Z" level=info msg="shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704149698Z" level=warning msg="cleaning up after shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704221897Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.728693847Z" level=info msg="shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729328035Z" level=info msg="ignoring event" container=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729433134Z" level=info msg="ignoring event" container=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.731072903Z" level=warning msg="cleaning up after shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.734341743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.734844834Z" level=info msg="ignoring event" container=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.735006831Z" level=info msg="ignoring event" container=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.743975166Z" level=info msg="shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744093164Z" level=warning msg="cleaning up after shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744205762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.730359917Z" level=info msg="shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751792823Z" level=warning msg="cleaning up after shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751834022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759660178Z" level=info msg="ignoring event" container=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759862574Z" level=info msg="ignoring event" container=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760069570Z" level=info msg="ignoring event" container=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760281966Z" level=info msg="ignoring event" container=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760277567Z" level=info msg="shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760380865Z" level=warning msg="cleaning up after shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760394364Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.748823577Z" level=info msg="shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765443172Z" level=warning msg="cleaning up after shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765461071Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769325900Z" level=info msg="shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769546096Z" level=warning msg="cleaning up after shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769827691Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774921997Z" level=info msg="shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774984396Z" level=warning msg="cleaning up after shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774997396Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788278152Z" level=info msg="shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788393450Z" level=warning msg="cleaning up after shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788444649Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.846647379Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1439]: time="2024-07-21T23:50:38.541510181Z" level=info msg="ignoring event" container=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.544122633Z" level=info msg="shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545450508Z" level=warning msg="cleaning up after shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545830901Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.461769452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.504142282Z" level=info msg="ignoring event" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504338210Z" level=info msg="shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504430323Z" level=warning msg="cleaning up after shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504443725Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.578959353Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579851478Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579966294Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.580111114Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:50:44 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Consumed 5.235s CPU time.
	Jul 21 23:50:44 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:50:44 functional-264400 dockerd[4061]: time="2024-07-21T23:50:44.647231378Z" level=info msg="Starting up"
	Jul 21 23:51:44 functional-264400 dockerd[4061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 21 23:51:44 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0721 23:51:44.740966    3296 out.go:239] * 
	W0721 23:51:44.742865    3296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 23:51:44.751682    3296 out.go:177] 
	
	
	==> Docker <==
	Jul 21 23:55:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:55:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab'"
	Jul 21 23:55:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:55:45Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 21 23:55:45 functional-264400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jul 21 23:55:45 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:55:45 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:55:45 functional-264400 dockerd[5358]: time="2024-07-21T23:55:45.828643179Z" level=info msg="Starting up"
	Jul 21 23:56:45 functional-264400 dockerd[5358]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 21 23:56:45 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 21 23:56:45 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 21 23:56:45 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="error getting RW layer size for container ID 'fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab'"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="error getting RW layer size for container ID '6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084'"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="error getting RW layer size for container ID '62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789'"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="error getting RW layer size for container ID 'd3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf'"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="error getting RW layer size for container ID 'c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5'"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="error getting RW layer size for container ID '46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd'"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="error getting RW layer size for container ID '74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559'"
	Jul 21 23:56:45 functional-264400 cri-dockerd[1342]: time="2024-07-21T23:56:45Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
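	
	Both start attempts above end the same way: dockerd exits after its dial to containerd's gRPC socket exceeds the context deadline, so systemd keeps scheduling restarts until the counter hits 5. A minimal Go sketch of that reachability check, assuming the socket path /run/containerd/containerd.sock from the messages above (dockerd's real dial is gRPC with a bounded context; a plain unix dial is only an approximation of it):
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Bound the check the way dockerd bounds its gRPC dial; when containerd
		// never comes up, dockerd's version surfaces as "context deadline exceeded".
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		var d net.Dialer
		conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
		if err != nil {
			fmt.Println("containerd socket unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("containerd socket is reachable")
	}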
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-21T23:56:45Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
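	
	The crictl and docker fallbacks fail for the same underlying reason: nothing is listening on /var/run/docker.sock. A minimal Go probe of the Docker Engine API's /_ping liveness endpoint over that socket, a sketch assuming the socket path from the error above:
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"net/http"
		"time"
	)
	
	func main() {
		// Route HTTP through the daemon's unix socket; the host in the URL is a
		// placeholder because this DialContext ignores the address argument.
		tr := &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		}
		client := &http.Client{Transport: tr, Timeout: 2 * time.Second}
	
		resp, err := client.Get("http://docker/_ping")
		if err != nil {
			// Mirrors the failures above when the daemon is down.
			fmt.Println("daemon not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("daemon responded:", resp.Status)
	}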
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
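	
	This refusal sits one layer above the runtime failure: with the container runtime down, the kube-apiserver pod cannot run, so nothing listens on port 8441. A minimal Go sketch of the TCP check, assuming the host and port from the kubectl error above:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// A refused TCP connect here means the apiserver process itself is not
		// up; it is not a TLS or authentication problem.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}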
	
	
	==> dmesg <==
	[Jul21 23:48] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.098748] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.521494] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.200309] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.246957] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +2.856365] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.199871] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.217213] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.265319] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[  +7.860794] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[  +0.119892] kauditd_printk_skb: 202 callbacks suppressed
	[  +6.328518] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.744596] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.374282] systemd-fstab-generator[1877]: Ignoring "noauto" option for root device
	[  +0.101703] kauditd_printk_skb: 48 callbacks suppressed
	[  +8.037046] systemd-fstab-generator[2275]: Ignoring "noauto" option for root device
	[  +0.135108] kauditd_printk_skb: 62 callbacks suppressed
	[Jul21 23:49] systemd-fstab-generator[2503]: Ignoring "noauto" option for root device
	[  +0.181805] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.421058] kauditd_printk_skb: 71 callbacks suppressed
	[Jul21 23:50] systemd-fstab-generator[3581]: Ignoring "noauto" option for root device
	[  +0.640674] systemd-fstab-generator[3616]: Ignoring "noauto" option for root device
	[  +0.278009] systemd-fstab-generator[3628]: Ignoring "noauto" option for root device
	[  +0.318270] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[  +5.355152] kauditd_printk_skb: 91 callbacks suppressed
	
	
	==> kernel <==
	 23:57:46 up 11 min,  0 users,  load average: 0.00, 0.09, 0.08
	Linux functional-264400 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 21 23:57:39 functional-264400 kubelet[2282]: E0721 23:57:39.507723    2282 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-264400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused"
	Jul 21 23:57:39 functional-264400 kubelet[2282]: E0721 23:57:39.509190    2282 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-264400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused"
	Jul 21 23:57:39 functional-264400 kubelet[2282]: E0721 23:57:39.510268    2282 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-264400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused"
	Jul 21 23:57:39 functional-264400 kubelet[2282]: E0721 23:57:39.511649    2282 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-264400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused"
	Jul 21 23:57:39 functional-264400 kubelet[2282]: E0721 23:57:39.511786    2282 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 21 23:57:39 functional-264400 kubelet[2282]: E0721 23:57:39.666423    2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused" interval="7s"
	Jul 21 23:57:41 functional-264400 kubelet[2282]: E0721 23:57:41.036716    2282 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 7m8.365011986s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 21 23:57:43 functional-264400 kubelet[2282]: E0721 23:57:43.350455    2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-264400.17e45f62791f0602\": dial tcp 172.28.193.97:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-264400.17e45f62791f0602  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-264400,UID:d4a646c87acc77b79c334272b81f6958,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.28.193.97:8441/readyz\": dial tcp 172.28.193.97:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-264400,},FirstTimestamp:2024-07-21 23:50:34.105882114 +0000 UTC m=+105.975509982,LastTimestamp:2024-07-21 23:50:36.107073829 +0000 UTC m=+107.976701697,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-264400,}"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.037901    2282 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 7m13.366195075s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.069676    2282 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.070891    2282 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.070920    2282 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.070998    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.071029    2282 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.071145    2282 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.071171    2282 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: I0721 23:57:46.071184    2282 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.071217    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.071238    2282 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.071308    2282 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.071328    2282 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.071340    2282 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.071698    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.071725    2282 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 21 23:57:46 functional-264400 kubelet[2282]: E0721 23:57:46.072093    2282 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:55:20.824469   12748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0721 23:55:45.562028   12748 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:55:45.595916   12748 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:55:45.624738   12748 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:55:45.653301   12748 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:55:45.683931   12748 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:55:45.713254   12748 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:55:45.746366   12748 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0721 23:56:45.843066   12748 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-264400 -n functional-264400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-264400 -n functional-264400: exit status 2 (12.1724793s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:57:46.841069   11448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-264400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (180.56s)
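
The "%!F(MISSING)" fragments in the kubelet lines above are not corruption of the Docker socket URL itself. The request URL embeds the URL-encoded socket path ("%2F" for "/", "%7B" for "{", "%7D" for "}"), and the resulting string is later passed through a printf-style formatter, which reads "%2F" as a width-2 %F verb with no operand and prints "%!F(MISSING)" in its place. The same effect produces the "printf %!s(MISSING)" visible further down in the provisioning log. A minimal standalone Go sketch of the mangling (illustration only, not minikube code):

	package main

	import "fmt"

	func main() {
		// URL-encoded docker socket path, as it appears inside the error string.
		url := "http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version"

		fmt.Println(url) // verbatim: http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version

		// Used as a printf FORMAT string, "%2F" parses as a width-2 %F verb
		// with no matching operand, so each occurrence renders as "%!F(MISSING)":
		fmt.Printf(url + "\n") // http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version
	}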

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 ssh sudo crictl images: exit status 1 (11.7364135s)

                                                
                                                
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:04:49.085970    6236 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1122: failed to get images by "out/minikube-windows-amd64.exe -p functional-264400 ssh sudo crictl images" ssh exit status 1
functional_test.go:1126: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:04:49.085970    6236 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.74s)
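
The FATA[0002] line shows crictl giving up after two seconds, its default client timeout, while validating the CRI v1 image API on unix:///var/run/cri-dockerd.sock. Getting DeadlineExceeded here (rather than "connection refused") is consistent with cri-dockerd still listening on the socket but stalling because dockerd behind it is down, which matches the kubelet's DockerDaemonNotReady messages earlier. A small diagnostic sketch, assuming it is run inside the VM against the endpoint named in the log, that separates "socket gone" from "socket up but backend hung":

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A raw dial succeeds as long as something is listening on the socket;
		// it says nothing about whether the gRPC service behind it will answer.
		conn, err := net.DialTimeout("unix", "/var/run/cri-dockerd.sock", 2*time.Second)
		if err != nil {
			fmt.Println("cri-dockerd socket unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket accepts connections; the stall is further down the stack")
	}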

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (179.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 1 (47.4056497s)

                                                
                                                
-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:05:00.821284   11916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1146: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-264400 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 1
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.5274314s)

                                                
                                                
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:05:48.219283    6668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 cache reload: (1m49.0087667s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.5212968s)

                                                
                                                
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:07:48.760859    8004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1161: expected "out/minikube-windows-amd64.exe -p functional-264400 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (179.47s)
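
All three legs of this subtest funnel through the same dead runtime: "docker rmi" cannot reach dockerd, and both "crictl inspecti" calls time out on the CRI endpoint, so even though "cache reload" itself reported success, the test has no working path to verify that the image actually landed. A direct liveness check against dockerd would disambiguate; a minimal sketch using the official Docker Go client (addressing the daemon via DOCKER_HOST or the default socket is an assumption about the environment):

	package main

	import (
		"context"
		"fmt"
		"time"

		"github.com/docker/docker/client"
	)

	func main() {
		// FromEnv honors DOCKER_HOST and falls back to the platform default
		// socket, e.g. unix:///var/run/docker.sock inside the minikube VM.
		cli, err := client.NewClientWithOpts(client.FromEnv)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		if _, err := cli.Ping(ctx); err != nil {
			fmt.Println("dockerd not answering:", err)
			return
		}
		fmt.Println("dockerd is up")
	}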

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (181.24s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 kubectl -- --context functional-264400 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 kubectl -- --context functional-264400 get pods: exit status 1 (10.7160697s)

                                                
                                                
** stderr ** 
	W0722 00:11:02.312804    8940 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0722 00:11:04.703890    4656 memcache.go:265] couldn't get current server API group list: Get "https://172.28.193.97:8441/api?timeout=32s": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.
	E0722 00:11:06.804050    4656 memcache.go:265] couldn't get current server API group list: Get "https://172.28.193.97:8441/api?timeout=32s": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.
	E0722 00:11:08.834945    4656 memcache.go:265] couldn't get current server API group list: Get "https://172.28.193.97:8441/api?timeout=32s": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.
	E0722 00:11:10.860221    4656 memcache.go:265] couldn't get current server API group list: Get "https://172.28.193.97:8441/api?timeout=32s": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.
	E0722 00:11:12.896947    4656 memcache.go:265] couldn't get current server API group list: Get "https://172.28.193.97:8441/api?timeout=32s": dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.28.193.97:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-264400 kubectl -- --context functional-264400 get pods": exit status 1
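
"connectex: No connection could be made because the target machine actively refused it" is the Windows (WSAECONNREFUSED) spelling of the "connect: connection refused" lines logged inside the VM: the host can reach 172.28.193.97, but nothing is listening on apiserver port 8441 because the control plane never came back up with the container runtime down. A host-side reachability sketch, with the address and port taken from the errors above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A refused TCP dial on Windows surfaces as "connectex: ... actively
		// refused it", matching the kubectl errors in this test.
		conn, err := net.DialTimeout("tcp", "172.28.193.97:8441", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}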
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-264400 -n functional-264400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-264400 -n functional-264400: exit status 2 (12.2262822s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:11:13.041645    9584 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 logs -n 25: (2m25.1564341s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:43 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-420400                                            | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	| start   | -p functional-264400                                        | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:49 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-264400                                        | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:49 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache add                                 | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:57 UTC | 21 Jul 24 23:59 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache add                                 | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:59 UTC | 22 Jul 24 00:01 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache add                                 | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:01 UTC | 22 Jul 24 00:03 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache add                                 | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:03 UTC | 22 Jul 24 00:04 UTC |
	|         | minikube-local-cache-test:functional-264400                 |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache delete                              | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:04 UTC | 22 Jul 24 00:04 UTC |
	|         | minikube-local-cache-test:functional-264400                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:04 UTC | 22 Jul 24 00:04 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:04 UTC | 22 Jul 24 00:04 UTC |
	| ssh     | functional-264400 ssh sudo                                  | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:04 UTC |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-264400                                           | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:05 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-264400 ssh                                       | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:05 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache reload                              | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:05 UTC | 22 Jul 24 00:07 UTC |
	| ssh     | functional-264400 ssh                                       | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:07 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:08 UTC | 22 Jul 24 00:08 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:08 UTC | 22 Jul 24 00:08 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-264400 kubectl --                                | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:11 UTC |                     |
	|         | --context functional-264400                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:49:13
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:49:13.243734    3296 out.go:291] Setting OutFile to fd 632 ...
	I0721 23:49:13.245089    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:49:13.245089    3296 out.go:304] Setting ErrFile to fd 612...
	I0721 23:49:13.245225    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:49:13.271799    3296 out.go:298] Setting JSON to false
	I0721 23:49:13.274576    3296 start.go:129] hostinfo: {"hostname":"minikube6","uptime":120960,"bootTime":1721484792,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0721 23:49:13.275656    3296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 23:49:13.279846    3296 out.go:177] * [functional-264400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0721 23:49:13.284743    3296 notify.go:220] Checking for updates...
	I0721 23:49:13.286577    3296 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0721 23:49:13.288761    3296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:49:13.292203    3296 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0721 23:49:13.295335    3296 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:49:13.299523    3296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:49:13.304221    3296 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:49:13.304533    3296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:49:18.732833    3296 out.go:177] * Using the hyperv driver based on existing profile
	I0721 23:49:18.737459    3296 start.go:297] selected driver: hyperv
	I0721 23:49:18.737459    3296 start.go:901] validating driver "hyperv" against &{Name:functional-264400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-264400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.97 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:49:18.737459    3296 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:49:18.788011    3296 cni.go:84] Creating CNI manager for ""
	I0721 23:49:18.788078    3296 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 23:49:18.788279    3296 start.go:340] cluster config:
	{Name:functional-264400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-264400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.97 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:49:18.788712    3296 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:49:18.793652    3296 out.go:177] * Starting "functional-264400" primary control-plane node in "functional-264400" cluster
	I0721 23:49:18.796373    3296 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 23:49:18.796558    3296 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 23:49:18.796558    3296 cache.go:56] Caching tarball of preloaded images
	I0721 23:49:18.796558    3296 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 23:49:18.796558    3296 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 23:49:18.797352    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\config.json ...
	I0721 23:49:18.798980    3296 start.go:360] acquireMachinesLock for functional-264400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:49:18.798980    3296 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-264400"
	I0721 23:49:18.798980    3296 start.go:96] Skipping create...Using existing machine configuration
	I0721 23:49:18.799979    3296 fix.go:54] fixHost starting: 
	I0721 23:49:18.799979    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:21.623488    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:21.623488    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:21.623488    3296 fix.go:112] recreateIfNeeded on functional-264400: state=Running err=<nil>
	W0721 23:49:21.623488    3296 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 23:49:21.629305    3296 out.go:177] * Updating the running hyperv "functional-264400" VM ...
	I0721 23:49:21.631533    3296 machine.go:94] provisionDockerMachine start ...
	I0721 23:49:21.631533    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:23.852288    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:23.852288    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:23.852522    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:26.470028    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:26.470028    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:26.478442    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:26.479167    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:26.479167    3296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0721 23:49:26.622339    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-264400
	
	I0721 23:49:26.622466    3296 buildroot.go:166] provisioning hostname "functional-264400"
	I0721 23:49:26.622607    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:28.824177    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:28.824557    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:28.824557    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:31.414467    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:31.414467    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:31.420319    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:31.421099    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:31.421099    3296 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-264400 && echo "functional-264400" | sudo tee /etc/hostname
	I0721 23:49:31.588774    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-264400
	
	I0721 23:49:31.588774    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:33.790066    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:33.790474    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:33.790474    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:36.394275    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:36.394275    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:36.399837    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:36.400299    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:36.400299    3296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-264400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-264400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-264400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:49:36.533255    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:49:36.533255    3296 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0721 23:49:36.533255    3296 buildroot.go:174] setting up certificates
	I0721 23:49:36.533255    3296 provision.go:84] configureAuth start
	I0721 23:49:36.533977    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:38.735744    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:38.735744    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:38.736834    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:41.319431    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:41.319431    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:41.319569    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:46.052701    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:46.052760    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:46.052760    3296 provision.go:143] copyHostCerts
	I0721 23:49:46.052760    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0721 23:49:46.053530    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0721 23:49:46.053530    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0721 23:49:46.054196    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0721 23:49:46.055555    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0721 23:49:46.055555    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0721 23:49:46.055555    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0721 23:49:46.056169    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0721 23:49:46.056925    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0721 23:49:46.057723    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0721 23:49:46.057723    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0721 23:49:46.057723    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0721 23:49:46.059166    3296 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-264400 san=[127.0.0.1 172.28.193.97 functional-264400 localhost minikube]
	I0721 23:49:46.255062    3296 provision.go:177] copyRemoteCerts
	I0721 23:49:46.265961    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:49:46.265961    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:48.459327    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:48.459802    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:48.459881    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:51.076501    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:51.076501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:51.078136    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:49:51.186062    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.919744s)
	I0721 23:49:51.186137    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0721 23:49:51.186285    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0721 23:49:51.234406    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0721 23:49:51.234628    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0721 23:49:51.286824    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0721 23:49:51.286998    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0721 23:49:51.338595    3296 provision.go:87] duration metric: took 14.8051233s to configureAuth
	I0721 23:49:51.338595    3296 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:49:51.339337    3296 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:49:51.339480    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:53.533831    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:53.534329    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:53.534329    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:56.137211    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:56.137352    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:56.143046    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:56.143046    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:56.143046    3296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0721 23:49:56.285359    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0721 23:49:56.285421    3296 buildroot.go:70] root file system type: tmpfs
	I0721 23:49:56.285723    3296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0721 23:49:56.285723    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:58.466808    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:58.466808    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:58.467788    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:01.029845    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:01.030501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:01.036243    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:01.036485    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:01.036485    3296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0721 23:50:01.200267    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0721 23:50:01.200414    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:06.025859    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:06.025859    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:06.031850    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:06.032255    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:06.032255    3296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0721 23:50:06.194379    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:50:06.194379    3296 machine.go:97] duration metric: took 44.5622996s to provisionDockerMachine
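Two details of the unit update above are easy to misread in the raw log. The `%!s(MISSING)` token is a Go fmt artifact of the logger, not of the command; the guest receives a plain `printf %s` with the rendered unit as its argument. And the update is idempotent: the unit is written to a `.new` file, and the daemon-reload/enable/restart only fires when `diff` reports a change. A minimal sketch of that write-then-swap pattern, with `UNIT_BODY` as a stand-in for the full docker.service text logged above:

    # Sketch: idempotent systemd unit update, mirroring the logged commands.
    # UNIT_BODY is a placeholder for the rendered docker.service body above.
    printf %s "$UNIT_BODY" | sudo tee /lib/systemd/system/docker.service.new >/dev/null
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }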
	I0721 23:50:06.194379    3296 start.go:293] postStartSetup for "functional-264400" (driver="hyperv")
	I0721 23:50:06.194379    3296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:50:06.209650    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:50:06.209650    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:08.393053    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:08.393053    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:08.393698    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:10.989526    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:10.989526    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:10.989613    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:11.100095    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8903841s)
	I0721 23:50:11.113697    3296 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:50:11.120917    3296 command_runner.go:130] > NAME=Buildroot
	I0721 23:50:11.120917    3296 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0721 23:50:11.120917    3296 command_runner.go:130] > ID=buildroot
	I0721 23:50:11.120917    3296 command_runner.go:130] > VERSION_ID=2023.02.9
	I0721 23:50:11.120999    3296 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0721 23:50:11.121050    3296 info.go:137] Remote host: Buildroot 2023.02.9
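The Remote host line above comes from parsing the guest's os-release file; the same check by hand:

    grep PRETTY_NAME /etc/os-release    # PRETTY_NAME="Buildroot 2023.02.9"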
	I0721 23:50:11.121050    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0721 23:50:11.121510    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0721 23:50:11.122518    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0721 23:50:11.122575    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0721 23:50:11.123543    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts -> hosts in /etc/test/nested/copy/5100
	I0721 23:50:11.123618    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts -> /etc/test/nested/copy/5100/hosts
	I0721 23:50:11.133586    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/5100
	I0721 23:50:11.152687    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0721 23:50:11.202971    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts --> /etc/test/nested/copy/5100/hosts (40 bytes)
	I0721 23:50:11.255289    3296 start.go:296] duration metric: took 5.0608472s for postStartSetup
	I0721 23:50:11.255289    3296 fix.go:56] duration metric: took 52.4546661s for fixHost
	I0721 23:50:11.255289    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:13.434592    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:13.434592    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:13.435305    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:16.055310    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:16.055310    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:16.061461    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:16.061461    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:16.061461    3296 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0721 23:50:16.203294    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605816.220500350
	
	I0721 23:50:16.203389    3296 fix.go:216] guest clock: 1721605816.220500350
	I0721 23:50:16.203389    3296 fix.go:229] Guest: 2024-07-21 23:50:16.22050035 +0000 UTC Remote: 2024-07-21 23:50:11.2552893 +0000 UTC m=+58.166615301 (delta=4.96521105s)
	I0721 23:50:16.203490    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:18.378670    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:18.378670    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:18.378758    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:21.010405    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:21.010405    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:21.016091    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:21.016289    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:21.016289    3296 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721605816
	I0721 23:50:21.170845    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Jul 21 23:50:16 UTC 2024
	
	I0721 23:50:21.171182    3296 fix.go:236] clock set: Sun Jul 21 23:50:16 UTC 2024
	 (err=<nil>)
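The clock fix above is two SSH commands around a comparison: read the guest's epoch with `date +%s.%N` (mangled to `%!s(MISSING).%!N(MISSING)` by the logger), compute the delta against the host clock (4.96s here), and write an absolute epoch back. Extracted as a sketch, with the IP and epoch taken from the log and assuming the profile's id_rsa key is loaded for ssh:

    # Sketch: probe the guest clock, then pin it to an absolute epoch (values from the log above).
    ssh docker@172.28.193.97 'date +%s.%N'             # guest reported 1721605816.220500350
    ssh docker@172.28.193.97 'sudo date -s @1721605816'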
	I0721 23:50:21.171182    3296 start.go:83] releasing machines lock for "functional-264400", held for 1m2.3714351s
	I0721 23:50:21.171265    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:23.395806    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:23.395850    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:23.395850    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:26.024178    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:26.024178    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:26.028577    3296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0721 23:50:26.028739    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:26.043523    3296 ssh_runner.go:195] Run: cat /version.json
	I0721 23:50:26.043523    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:28.404030    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:28.404030    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:31.161323    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:31.161323    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:31.162685    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:31.218457    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:31.219166    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:31.219224    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:31.264926    3296 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0721 23:50:31.265653    3296 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.236833s)
	W0721 23:50:31.265743    3296 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
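The probe fails for a reason visible in the stderr above: the Windows-side binary name `curl.exe` is passed verbatim into the Linux guest, where no such command exists. Assuming the Buildroot image ships a plain `curl`, the check the log intends would read:

    # Sketch: the in-guest registry reachability probe, without the Windows suffix.
    curl -sS -m 2 https://registry.k8s.io/ || echo 'registry unreachable from guest'

That exit-127 failure is what trips the proxy warning printed below, whether or not the registry is actually reachable.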
	I0721 23:50:31.313118    3296 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0721 23:50:31.313718    3296 ssh_runner.go:235] Completed: cat /version.json: (5.2701285s)
	I0721 23:50:31.326430    3296 ssh_runner.go:195] Run: systemctl --version
	I0721 23:50:31.335559    3296 command_runner.go:130] > systemd 252 (252)
	I0721 23:50:31.335630    3296 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0721 23:50:31.347271    3296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0721 23:50:31.356110    3296 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0721 23:50:31.356110    3296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:50:31.367122    3296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0721 23:50:31.377018    3296 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0721 23:50:31.377190    3296 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0721 23:50:31.390830    3296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0721 23:50:31.390919    3296 start.go:495] detecting cgroup driver to use...
	I0721 23:50:31.391177    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:50:31.430605    3296 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0721 23:50:31.443163    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0721 23:50:31.473200    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0721 23:50:31.495064    3296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0721 23:50:31.505345    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0721 23:50:31.537330    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:50:31.570237    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0721 23:50:31.603641    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:50:31.634289    3296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:50:31.667749    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0721 23:50:31.699347    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0721 23:50:31.728970    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
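The run of `sed` edits above rewrites /etc/containerd/config.toml in place: pause:3.9 as the sandbox image, `restrict_oom_score_adj = false`, the cgroupfs driver (`SystemdCgroup = false`), the runc.v2 shim in place of the v1 runtimes, the CNI conf_dir, and unprivileged ports enabled. A quick spot-check that the edits landed, keyed on the option names those commands touch:

    # Sketch: verify the containerd settings rewritten by the sed pipeline above.
    grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' \
        /etc/containerd/config.toml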
	I0721 23:50:31.758020    3296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:50:31.777872    3296 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0721 23:50:31.788667    3296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 23:50:31.817394    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:50:32.095962    3296 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0721 23:50:32.129994    3296 start.go:495] detecting cgroup driver to use...
	I0721 23:50:32.144084    3296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0721 23:50:32.171240    3296 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0721 23:50:32.171513    3296 command_runner.go:130] > [Unit]
	I0721 23:50:32.171513    3296 command_runner.go:130] > Description=Docker Application Container Engine
	I0721 23:50:32.171513    3296 command_runner.go:130] > Documentation=https://docs.docker.com
	I0721 23:50:32.171513    3296 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0721 23:50:32.171513    3296 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0721 23:50:32.171626    3296 command_runner.go:130] > StartLimitBurst=3
	I0721 23:50:32.171626    3296 command_runner.go:130] > StartLimitIntervalSec=60
	I0721 23:50:32.171626    3296 command_runner.go:130] > [Service]
	I0721 23:50:32.171626    3296 command_runner.go:130] > Type=notify
	I0721 23:50:32.171626    3296 command_runner.go:130] > Restart=on-failure
	I0721 23:50:32.171626    3296 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0721 23:50:32.171697    3296 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0721 23:50:32.171697    3296 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0721 23:50:32.171697    3296 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0721 23:50:32.171697    3296 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0721 23:50:32.171697    3296 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0721 23:50:32.171763    3296 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0721 23:50:32.171763    3296 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0721 23:50:32.171763    3296 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0721 23:50:32.171763    3296 command_runner.go:130] > ExecStart=
	I0721 23:50:32.171845    3296 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0721 23:50:32.171845    3296 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0721 23:50:32.171845    3296 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0721 23:50:32.171910    3296 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0721 23:50:32.171935    3296 command_runner.go:130] > LimitNOFILE=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > LimitNPROC=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > LimitCORE=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0721 23:50:32.171968    3296 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0721 23:50:32.171968    3296 command_runner.go:130] > TasksMax=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > TimeoutStartSec=0
	I0721 23:50:32.171968    3296 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0721 23:50:32.171968    3296 command_runner.go:130] > Delegate=yes
	I0721 23:50:32.171968    3296 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0721 23:50:32.171968    3296 command_runner.go:130] > KillMode=process
	I0721 23:50:32.171968    3296 command_runner.go:130] > [Install]
	I0721 23:50:32.171968    3296 command_runner.go:130] > WantedBy=multi-user.target
	I0721 23:50:32.185323    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:50:32.222308    3296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:50:32.269127    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:50:32.308679    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0721 23:50:32.337720    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:50:32.373754    3296 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0721 23:50:32.387559    3296 ssh_runner.go:195] Run: which cri-dockerd
	I0721 23:50:32.393567    3296 command_runner.go:130] > /usr/bin/cri-dockerd
	I0721 23:50:32.407091    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0721 23:50:32.429103    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0721 23:50:32.473119    3296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0721 23:50:32.747171    3296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0721 23:50:32.998956    3296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0721 23:50:32.999296    3296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0721 23:50:33.051719    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:50:33.356719    3296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 23:51:44.652209    3296 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0721 23:51:44.652633    3296 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0721 23:51:44.654236    3296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.2965818s)
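The restart above fails: docker.service's control process exits non-zero, and the command returns only after 71 seconds. The error text names its own triage commands, which the log performs next in journal form; by hand they would be:

    # Sketch: triage the failed docker.service restart, per the hint in the error above.
    systemctl status docker.service --no-pager
    journalctl -xeu docker.service --no-pager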
	I0721 23:51:44.666167    3296 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.168118118Z" level=info msg="Starting up"
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.169181481Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.170711772Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.204506281Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239101537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239202743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239269947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239286548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239363452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239504161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239689572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239796878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239818179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239829580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240023691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240532022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243523700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243618405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244010128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244130936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244288745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244514558Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274608247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274731654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274757156Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274774157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274806859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.698341    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275350391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.698341    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275567104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275667010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275688011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275707112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275721313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275742514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275764116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275780417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275794017Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275807418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698670    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275819619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275876422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275890923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698817    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275939726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698817    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275958027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275970928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275983929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275997230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276018931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276036232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276049233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276066634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276084135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276105336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699059    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276119437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699059    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276132038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276357651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276454457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276513660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699204    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276580764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.699204    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276655869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699260    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276712372Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.699260    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276762075Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277188900Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277433015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277589224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.278054352Z" level=info msg="containerd successfully booted in 0.074903s"
	I0721 23:51:44.699388    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.247751721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.277834397Z" level=info msg="Loading containers: start."
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.441509517Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.655815314Z" level=info msg="Loading containers: done."
	I0721 23:51:44.699472    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676595884Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.699498    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676745891Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.699498    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788327964Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788443669Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.978875672Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980386251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980770345Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980878444Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980936643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.044964117Z" level=info msg="Starting up"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.046051302Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.047547081Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1095
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.077138071Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103738503Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103854902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103894101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103909101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103931301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103942600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104085398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104215897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104236396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104246796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104289796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104467393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108266041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108366439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108599936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108922331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109041730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109088329Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109284326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109335126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109351726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109365825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109378125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109446524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110271513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110431611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110840005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110866105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110891004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110910804Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110947503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700871    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110983003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700871    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111002703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111019702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111038702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111054502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111096201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111137101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111175900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111189300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111205600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111236299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111251899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111274399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111294599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111330498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111345998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111376797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111394397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111421297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111457096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111535995Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111594594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111638794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111653394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111706593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111722293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111736992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111747992Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112862377Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112947276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113020375Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113041274Z" level=info msg="containerd successfully booted in 0.036788s"
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.102172085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.122803299Z" level=info msg="Loading containers: start."
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.249728942Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.363421569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.454819504Z" level=info msg="Loading containers: done."
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478314979Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478440677Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523349955Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.701834    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523496853Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.403414153Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.404940232Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405762121Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405911219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405963218Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.702027    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 dockerd[1439]: time="2024-07-21T23:48:27.488211040Z" level=info msg="Starting up"
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.283164837Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.284334421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1445
	I0721 23:51:44.702206    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.322546392Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.353969657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354127155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354245353Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.702289    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354279453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354386052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354424751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354988043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355091642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355116141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355128941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355204740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355558335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359334983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359441882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359612579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359749577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359878975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359993174Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360138772Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360266770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.702688    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360289170Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360306770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360434168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360490167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360944161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361072859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361207757Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361229957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361245657Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361275356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361389255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361429254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361568652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361594052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361609452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361622451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361656951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361680651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361901447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361999446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362019946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362046645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362061445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362075845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362092245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362111045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362124244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362136944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362154644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362178044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362192043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362211643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362342741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362390341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362406041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362418640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362429040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362444140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.703684    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362455640Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.703715    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362742536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.703715    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362893434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362971133Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362995232Z" level=info msg="containerd successfully booted in 0.041146s"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:29 functional-264400 dockerd[1439]: time="2024-07-21T23:48:29.329544955Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.660319456Z" level=info msg="Loading containers: start."
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.796232675Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.907798631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.001144539Z" level=info msg="Loading containers: done."
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022589743Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022719941Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067087927Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067159926Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203705562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203993309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204174339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204501992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275055860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275220587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275259793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275372211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333574371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333646683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333744099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333850816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416645674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416770094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416839505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.417133553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625603538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625875582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704442    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625899586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704622    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.626009704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704779    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776176512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776370643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776546172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.835904420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836147160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836225472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836649541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887079538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887333179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887543914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887899671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.134772975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141087657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141198860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141750876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576099088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576165990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576179490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576332795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.700943823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701110428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701133028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705465    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701305233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705465    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.251787691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705516    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252007895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705516    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252034496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252193199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.458949480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459063270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459134864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459296351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733493277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733949139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734221216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734462295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.409481815Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.656026383Z" level=info msg="ignoring event" container=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.657306959Z" level=info msg="shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658560636Z" level=warning msg="cleaning up after shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658678934Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.676709403Z" level=info msg="ignoring event" container=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.677164894Z" level=info msg="shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678209575Z" level=warning msg="cleaning up after shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678304373Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.695081165Z" level=info msg="ignoring event" container=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695302161Z" level=info msg="shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695385859Z" level=warning msg="cleaning up after shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695446458Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.701015856Z" level=info msg="ignoring event" container=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.702594427Z" level=info msg="shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704149698Z" level=warning msg="cleaning up after shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	I0721 23:51:44.706258    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704221897Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.728693847Z" level=info msg="shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729328035Z" level=info msg="ignoring event" container=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729433134Z" level=info msg="ignoring event" container=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.731072903Z" level=warning msg="cleaning up after shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.734341743Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.734844834Z" level=info msg="ignoring event" container=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.735006831Z" level=info msg="ignoring event" container=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.743975166Z" level=info msg="shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744093164Z" level=warning msg="cleaning up after shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744205762Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.730359917Z" level=info msg="shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751792823Z" level=warning msg="cleaning up after shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751834022Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759660178Z" level=info msg="ignoring event" container=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759862574Z" level=info msg="ignoring event" container=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760069570Z" level=info msg="ignoring event" container=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760281966Z" level=info msg="ignoring event" container=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760277567Z" level=info msg="shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760380865Z" level=warning msg="cleaning up after shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760394364Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.748823577Z" level=info msg="shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765443172Z" level=warning msg="cleaning up after shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765461071Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769325900Z" level=info msg="shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769546096Z" level=warning msg="cleaning up after shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769827691Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706951    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774921997Z" level=info msg="shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	I0721 23:51:44.706951    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774984396Z" level=warning msg="cleaning up after shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	I0721 23:51:44.706995    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774997396Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788278152Z" level=info msg="shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788393450Z" level=warning msg="cleaning up after shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788444649Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.846647379Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1439]: time="2024-07-21T23:50:38.541510181Z" level=info msg="ignoring event" container=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.544122633Z" level=info msg="shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545450508Z" level=warning msg="cleaning up after shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545830901Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.461769452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.504142282Z" level=info msg="ignoring event" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504338210Z" level=info msg="shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504430323Z" level=warning msg="cleaning up after shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504443725Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.578959353Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579851478Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579966294Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.580111114Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Consumed 5.235s CPU time.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 dockerd[4061]: time="2024-07-21T23:50:44.647231378Z" level=info msg="Starting up"
	I0721 23:51:44.707551    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 dockerd[4061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0721 23:51:44.707551    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0721 23:51:44.707593    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0721 23:51:44.707593    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	I0721 23:51:44.736086    3296 out.go:177] 
	W0721 23:51:44.740389    3296 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 21 23:47:42 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.168118118Z" level=info msg="Starting up"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.169181481Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.170711772Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.204506281Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239101537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239202743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239269947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239286548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239363452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239504161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239689572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239796878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239818179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239829580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240023691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240532022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243523700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243618405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244010128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244130936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244288745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244514558Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274608247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274731654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274757156Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274774157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274806859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275350391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275567104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275667010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275688011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275707112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275721313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275742514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275764116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275780417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275794017Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275807418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275819619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275876422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275890923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275939726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275958027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275970928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275983929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275997230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276018931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276036232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276049233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276066634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276084135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276105336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276119437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276132038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276357651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276454457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276513660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276580764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276655869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276712372Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276762075Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277188900Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277433015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277589224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.278054352Z" level=info msg="containerd successfully booted in 0.074903s"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.247751721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.277834397Z" level=info msg="Loading containers: start."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.441509517Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.655815314Z" level=info msg="Loading containers: done."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676595884Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676745891Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788327964Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788443669Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:47:43 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.978875672Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:14 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980386251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980770345Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980878444Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980936643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:15 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:15 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:15 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.044964117Z" level=info msg="Starting up"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.046051302Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.047547081Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1095
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.077138071Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103738503Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103854902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103894101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103909101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103931301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103942600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104085398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104215897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104236396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104246796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104289796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104467393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108266041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108366439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108599936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108922331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109041730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109088329Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109284326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109335126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109351726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109365825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109378125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109446524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110271513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110431611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110840005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110866105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110891004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110910804Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110947503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110983003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111002703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111019702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111038702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111054502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111096201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111137101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111175900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111189300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111205600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111236299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111251899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111274399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111294599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111330498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111345998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111376797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111394397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111421297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111457096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111535995Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111594594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111638794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111653394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111706593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111722293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111736992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111747992Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112862377Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112947276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113020375Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113041274Z" level=info msg="containerd successfully booted in 0.036788s"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.102172085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.122803299Z" level=info msg="Loading containers: start."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.249728942Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.363421569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.454819504Z" level=info msg="Loading containers: done."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478314979Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478440677Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523349955Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:17 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523496853Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:26 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.403414153Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.404940232Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405762121Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405911219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405963218Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:27 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:27 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:27 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:27 functional-264400 dockerd[1439]: time="2024-07-21T23:48:27.488211040Z" level=info msg="Starting up"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.283164837Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.284334421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1445
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.322546392Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.353969657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354127155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354245353Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354279453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354386052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354424751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354988043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355091642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355116141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355128941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355204740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355558335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359334983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359441882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359612579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359749577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359878975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359993174Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360138772Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360266770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360289170Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360306770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360434168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360490167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360944161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361072859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361207757Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361229957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361245657Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361275356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361389255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361429254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361568652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361594052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361609452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361622451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361656951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361680651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361901447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361999446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362019946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362046645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362061445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362075845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362092245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362111045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362124244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362136944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362154644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362178044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362192043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362211643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362342741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362390341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362406041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362418640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362429040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362444140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362455640Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362742536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362893434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362971133Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362995232Z" level=info msg="containerd successfully booted in 0.041146s"
	Jul 21 23:48:29 functional-264400 dockerd[1439]: time="2024-07-21T23:48:29.329544955Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.660319456Z" level=info msg="Loading containers: start."
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.796232675Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.907798631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.001144539Z" level=info msg="Loading containers: done."
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022589743Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022719941Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067087927Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067159926Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:33 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203705562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203993309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204174339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204501992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275055860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275220587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275259793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275372211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333574371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333646683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333744099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333850816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416645674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416770094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416839505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.417133553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625603538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625875582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625899586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.626009704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776176512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776370643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776546172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.835904420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836147160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836225472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836649541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887079538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887333179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887543914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887899671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.134772975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141087657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141198860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141750876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576099088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576165990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576179490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576332795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.700943823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701110428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701133028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701305233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.251787691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252007895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252034496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252193199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.458949480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459063270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459134864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459296351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733493277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733949139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734221216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734462295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:50:33 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.409481815Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.656026383Z" level=info msg="ignoring event" container=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.657306959Z" level=info msg="shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658560636Z" level=warning msg="cleaning up after shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658678934Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.676709403Z" level=info msg="ignoring event" container=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.677164894Z" level=info msg="shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678209575Z" level=warning msg="cleaning up after shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678304373Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.695081165Z" level=info msg="ignoring event" container=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695302161Z" level=info msg="shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695385859Z" level=warning msg="cleaning up after shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695446458Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.701015856Z" level=info msg="ignoring event" container=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.702594427Z" level=info msg="shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704149698Z" level=warning msg="cleaning up after shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704221897Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.728693847Z" level=info msg="shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729328035Z" level=info msg="ignoring event" container=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729433134Z" level=info msg="ignoring event" container=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.731072903Z" level=warning msg="cleaning up after shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.734341743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.734844834Z" level=info msg="ignoring event" container=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.735006831Z" level=info msg="ignoring event" container=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.743975166Z" level=info msg="shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744093164Z" level=warning msg="cleaning up after shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744205762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.730359917Z" level=info msg="shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751792823Z" level=warning msg="cleaning up after shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751834022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759660178Z" level=info msg="ignoring event" container=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759862574Z" level=info msg="ignoring event" container=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760069570Z" level=info msg="ignoring event" container=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760281966Z" level=info msg="ignoring event" container=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760277567Z" level=info msg="shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760380865Z" level=warning msg="cleaning up after shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760394364Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.748823577Z" level=info msg="shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765443172Z" level=warning msg="cleaning up after shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765461071Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769325900Z" level=info msg="shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769546096Z" level=warning msg="cleaning up after shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769827691Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774921997Z" level=info msg="shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774984396Z" level=warning msg="cleaning up after shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774997396Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788278152Z" level=info msg="shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788393450Z" level=warning msg="cleaning up after shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788444649Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.846647379Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1439]: time="2024-07-21T23:50:38.541510181Z" level=info msg="ignoring event" container=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.544122633Z" level=info msg="shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545450508Z" level=warning msg="cleaning up after shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545830901Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.461769452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.504142282Z" level=info msg="ignoring event" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504338210Z" level=info msg="shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504430323Z" level=warning msg="cleaning up after shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504443725Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.578959353Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579851478Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579966294Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.580111114Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:50:44 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Consumed 5.235s CPU time.
	Jul 21 23:50:44 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:50:44 functional-264400 dockerd[4061]: time="2024-07-21T23:50:44.647231378Z" level=info msg="Starting up"
	Jul 21 23:51:44 functional-264400 dockerd[4061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 21 23:51:44 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0721 23:51:44.740966    3296 out.go:239] * 
	W0721 23:51:44.742865    3296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 23:51:44.751682    3296 out.go:177] 
	
	
	==> Docker <==
	Jul 22 00:11:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:11:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID '46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd'"
	Jul 22 00:11:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:11:49Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 22 00:11:49 functional-264400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jul 22 00:11:49 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 00:11:49 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 00:11:49 functional-264400 dockerd[9222]: time="2024-07-22T00:11:49.830131304Z" level=info msg="Starting up"
	Jul 22 00:12:49 functional-264400 dockerd[9222]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="error getting RW layer size for container ID 'd3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf'"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="error getting RW layer size for container ID '62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID '62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789'"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="error getting RW layer size for container ID '6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084'"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="error getting RW layer size for container ID 'fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:12:49 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab'"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="error getting RW layer size for container ID '74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID '74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559'"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="error getting RW layer size for container ID '46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID '46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd'"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="error getting RW layer size for container ID 'c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:12:49 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:12:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5'"
	Jul 22 00:12:49 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 00:12:49 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-22T00:12:49Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul21 23:48] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.098748] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.521494] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.200309] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.246957] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +2.856365] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.199871] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.217213] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.265319] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[  +7.860794] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[  +0.119892] kauditd_printk_skb: 202 callbacks suppressed
	[  +6.328518] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.744596] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.374282] systemd-fstab-generator[1877]: Ignoring "noauto" option for root device
	[  +0.101703] kauditd_printk_skb: 48 callbacks suppressed
	[  +8.037046] systemd-fstab-generator[2275]: Ignoring "noauto" option for root device
	[  +0.135108] kauditd_printk_skb: 62 callbacks suppressed
	[Jul21 23:49] systemd-fstab-generator[2503]: Ignoring "noauto" option for root device
	[  +0.181805] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.421058] kauditd_printk_skb: 71 callbacks suppressed
	[Jul21 23:50] systemd-fstab-generator[3581]: Ignoring "noauto" option for root device
	[  +0.640674] systemd-fstab-generator[3616]: Ignoring "noauto" option for root device
	[  +0.278009] systemd-fstab-generator[3628]: Ignoring "noauto" option for root device
	[  +0.318270] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[  +5.355152] kauditd_printk_skb: 91 callbacks suppressed
	
	
	==> kernel <==
	 00:13:50 up 27 min,  0 users,  load average: 0.01, 0.02, 0.00
	Linux functional-264400 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 22 00:13:48 functional-264400 kubelet[2282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:13:48 functional-264400 kubelet[2282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 00:13:48 functional-264400 kubelet[2282]: E0722 00:13:48.590673    2282 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-264400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-264400?resourceVersion=0&timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused"
	Jul 22 00:13:48 functional-264400 kubelet[2282]: E0722 00:13:48.591821    2282 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-264400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused"
	Jul 22 00:13:48 functional-264400 kubelet[2282]: E0722 00:13:48.593076    2282 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-264400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused"
	Jul 22 00:13:48 functional-264400 kubelet[2282]: E0722 00:13:48.593948    2282 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-264400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused"
	Jul 22 00:13:48 functional-264400 kubelet[2282]: E0722 00:13:48.595238    2282 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-264400\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused"
	Jul 22 00:13:48 functional-264400 kubelet[2282]: E0722 00:13:48.595335    2282 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.069934    2282 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.070418    2282 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.070500    2282 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.075075    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.075120    2282 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.075360    2282 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.075516    2282 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: I0722 00:13:50.075559    2282 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.076537    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.077368    2282 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.078321    2282 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.078713    2282 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.079299    2282 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.081525    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.081640    2282 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.081897    2282 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Jul 22 00:13:50 functional-264400 kubelet[2282]: E0722 00:13:50.179774    2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.28.193.97:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-functional-264400.17e45f642dd4f12c  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-functional-264400,UID:989376ba212faad8bb0877aaf59fcbbc,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-264400,},FirstTimestamp:2024-07-21 23:50:41.432670508 +0000 UTC m=+113.302298376,LastTimestamp:2024-07-21 23:50:41.432670508 +0000 UTC m=+113.302298376,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-264400,}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:11:25.261954    5028 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0722 00:11:49.548842    5028 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:11:49.584597    5028 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:11:49.618469    5028 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:11:49.648601    5028 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:11:49.681480    5028 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:11:49.710989    5028 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:11:49.744059    5028 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:12:49.850226    5028 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-264400 -n functional-264400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-264400 -n functional-264400: exit status 2 (12.7330226s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:13:50.834115    1232 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-264400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (181.24s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (120.01s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-264400 -n functional-264400
E0722 00:14:11.874999    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-264400 -n functional-264400: exit status 2 (12.4259489s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:14:03.579112    5144 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 logs -n 25: (1m34.9739029s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:43 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:44 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-420400 --log_dir                                     | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-420400                                            | nospam-420400     | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:45 UTC |
	| start   | -p functional-264400                                        | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:45 UTC | 21 Jul 24 23:49 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-264400                                        | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:49 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache add                                 | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:57 UTC | 21 Jul 24 23:59 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache add                                 | functional-264400 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:59 UTC | 22 Jul 24 00:01 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache add                                 | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:01 UTC | 22 Jul 24 00:03 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache add                                 | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:03 UTC | 22 Jul 24 00:04 UTC |
	|         | minikube-local-cache-test:functional-264400                 |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache delete                              | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:04 UTC | 22 Jul 24 00:04 UTC |
	|         | minikube-local-cache-test:functional-264400                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:04 UTC | 22 Jul 24 00:04 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:04 UTC | 22 Jul 24 00:04 UTC |
	| ssh     | functional-264400 ssh sudo                                  | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:04 UTC |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-264400                                           | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:05 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-264400 ssh                                       | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:05 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-264400 cache reload                              | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:05 UTC | 22 Jul 24 00:07 UTC |
	| ssh     | functional-264400 ssh                                       | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:07 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:08 UTC | 22 Jul 24 00:08 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:08 UTC | 22 Jul 24 00:08 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-264400 kubectl --                                | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:11 UTC |                     |
	|         | --context functional-264400                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:49:13
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:49:13.243734    3296 out.go:291] Setting OutFile to fd 632 ...
	I0721 23:49:13.245089    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:49:13.245089    3296 out.go:304] Setting ErrFile to fd 612...
	I0721 23:49:13.245225    3296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:49:13.271799    3296 out.go:298] Setting JSON to false
	I0721 23:49:13.274576    3296 start.go:129] hostinfo: {"hostname":"minikube6","uptime":120960,"bootTime":1721484792,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0721 23:49:13.275656    3296 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 23:49:13.279846    3296 out.go:177] * [functional-264400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0721 23:49:13.284743    3296 notify.go:220] Checking for updates...
	I0721 23:49:13.286577    3296 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0721 23:49:13.288761    3296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 23:49:13.292203    3296 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0721 23:49:13.295335    3296 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 23:49:13.299523    3296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 23:49:13.304221    3296 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:49:13.304533    3296 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:49:18.732833    3296 out.go:177] * Using the hyperv driver based on existing profile
	I0721 23:49:18.737459    3296 start.go:297] selected driver: hyperv
	I0721 23:49:18.737459    3296 start.go:901] validating driver "hyperv" against &{Name:functional-264400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-264400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.97 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:49:18.737459    3296 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 23:49:18.788011    3296 cni.go:84] Creating CNI manager for ""
	I0721 23:49:18.788078    3296 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 23:49:18.788279    3296 start.go:340] cluster config:
	{Name:functional-264400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-264400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.97 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:49:18.788712    3296 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:49:18.793652    3296 out.go:177] * Starting "functional-264400" primary control-plane node in "functional-264400" cluster
	I0721 23:49:18.796373    3296 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 23:49:18.796558    3296 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 23:49:18.796558    3296 cache.go:56] Caching tarball of preloaded images
	I0721 23:49:18.796558    3296 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 23:49:18.796558    3296 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 23:49:18.797352    3296 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\config.json ...
	I0721 23:49:18.798980    3296 start.go:360] acquireMachinesLock for functional-264400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0721 23:49:18.798980    3296 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-264400"
	I0721 23:49:18.798980    3296 start.go:96] Skipping create...Using existing machine configuration
	I0721 23:49:18.799979    3296 fix.go:54] fixHost starting: 
	I0721 23:49:18.799979    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:21.623488    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:21.623488    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:21.623488    3296 fix.go:112] recreateIfNeeded on functional-264400: state=Running err=<nil>
	W0721 23:49:21.623488    3296 fix.go:138] unexpected machine state, will restart: <nil>
	I0721 23:49:21.629305    3296 out.go:177] * Updating the running hyperv "functional-264400" VM ...
	I0721 23:49:21.631533    3296 machine.go:94] provisionDockerMachine start ...
	I0721 23:49:21.631533    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:23.852288    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:23.852288    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:23.852522    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:26.470028    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:26.470028    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:26.478442    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:26.479167    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:26.479167    3296 main.go:141] libmachine: About to run SSH command:
	hostname
	I0721 23:49:26.622339    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-264400
	
	I0721 23:49:26.622466    3296 buildroot.go:166] provisioning hostname "functional-264400"
	I0721 23:49:26.622607    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:28.824177    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:28.824557    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:28.824557    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:31.414467    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:31.414467    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:31.420319    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:31.421099    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:31.421099    3296 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-264400 && echo "functional-264400" | sudo tee /etc/hostname
	I0721 23:49:31.588774    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-264400
	
	I0721 23:49:31.588774    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:33.790066    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:33.790474    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:33.790474    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:36.394275    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:36.394275    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:36.399837    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:36.400299    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:36.400299    3296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-264400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-264400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-264400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0721 23:49:36.533255    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:49:36.533255    3296 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0721 23:49:36.533255    3296 buildroot.go:174] setting up certificates
	I0721 23:49:36.533255    3296 provision.go:84] configureAuth start
	I0721 23:49:36.533977    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:38.735744    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:38.735744    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:38.736834    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:41.319431    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:41.319431    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:41.319569    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:43.497052    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:46.052701    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:46.052760    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:46.052760    3296 provision.go:143] copyHostCerts
	I0721 23:49:46.052760    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0721 23:49:46.053530    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0721 23:49:46.053530    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0721 23:49:46.054196    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0721 23:49:46.055555    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0721 23:49:46.055555    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0721 23:49:46.055555    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0721 23:49:46.056169    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0721 23:49:46.056925    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0721 23:49:46.057723    3296 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0721 23:49:46.057723    3296 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0721 23:49:46.057723    3296 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0721 23:49:46.059166    3296 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-264400 san=[127.0.0.1 172.28.193.97 functional-264400 localhost minikube]
	I0721 23:49:46.255062    3296 provision.go:177] copyRemoteCerts
	I0721 23:49:46.265961    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0721 23:49:46.265961    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:48.459327    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:48.459802    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:48.459881    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:51.076501    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:51.076501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:51.078136    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:49:51.186062    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.919744s)
	I0721 23:49:51.186137    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0721 23:49:51.186285    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0721 23:49:51.234406    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0721 23:49:51.234628    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0721 23:49:51.286824    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0721 23:49:51.286998    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0721 23:49:51.338595    3296 provision.go:87] duration metric: took 14.8051233s to configureAuth
	I0721 23:49:51.338595    3296 buildroot.go:189] setting minikube options for container-runtime
	I0721 23:49:51.339337    3296 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 23:49:51.339480    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:53.533831    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:53.534329    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:53.534329    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:49:56.137211    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:49:56.137352    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:56.143046    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:49:56.143046    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:49:56.143046    3296 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0721 23:49:56.285359    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0721 23:49:56.285421    3296 buildroot.go:70] root file system type: tmpfs
	I0721 23:49:56.285723    3296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0721 23:49:56.285723    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:49:58.466808    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:49:58.466808    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:49:58.467788    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:01.029845    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:01.030501    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:01.036243    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:01.036485    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:01.036485    3296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0721 23:50:01.200267    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0721 23:50:01.200414    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:03.413139    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:06.025859    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:06.025859    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:06.031850    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:06.032255    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:06.032255    3296 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0721 23:50:06.194379    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0721 23:50:06.194379    3296 machine.go:97] duration metric: took 44.5622996s to provisionDockerMachine
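	The SSH one-liner that closed out provisioning above is an idempotent unit update: `diff -u` exits 0 when the freshly generated docker.service matches the installed one, so nothing changes; only when the files differ does the `|| { ... }` branch swap in the new unit and run daemon-reload, enable, and restart. That is why an unchanged configuration produces no docker restart at this step.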
	I0721 23:50:06.194379    3296 start.go:293] postStartSetup for "functional-264400" (driver="hyperv")
	I0721 23:50:06.194379    3296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0721 23:50:06.209650    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0721 23:50:06.209650    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:08.393053    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:08.393053    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:08.393698    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:10.989526    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:10.989526    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:10.989613    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:11.100095    3296 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8903841s)
	I0721 23:50:11.113697    3296 ssh_runner.go:195] Run: cat /etc/os-release
	I0721 23:50:11.120917    3296 command_runner.go:130] > NAME=Buildroot
	I0721 23:50:11.120917    3296 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0721 23:50:11.120917    3296 command_runner.go:130] > ID=buildroot
	I0721 23:50:11.120917    3296 command_runner.go:130] > VERSION_ID=2023.02.9
	I0721 23:50:11.120999    3296 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0721 23:50:11.121050    3296 info.go:137] Remote host: Buildroot 2023.02.9
	I0721 23:50:11.121050    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0721 23:50:11.121510    3296 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0721 23:50:11.122518    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0721 23:50:11.122575    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0721 23:50:11.123543    3296 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts -> hosts in /etc/test/nested/copy/5100
	I0721 23:50:11.123618    3296 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts -> /etc/test/nested/copy/5100/hosts
	I0721 23:50:11.133586    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/5100
	I0721 23:50:11.152687    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0721 23:50:11.202971    3296 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts --> /etc/test/nested/copy/5100/hosts (40 bytes)
	I0721 23:50:11.255289    3296 start.go:296] duration metric: took 5.0608472s for postStartSetup
	I0721 23:50:11.255289    3296 fix.go:56] duration metric: took 52.4546661s for fixHost
	I0721 23:50:11.255289    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:13.434592    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:13.434592    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:13.435305    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:16.055310    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:16.055310    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:16.061461    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:16.061461    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:16.061461    3296 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0721 23:50:16.203294    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721605816.220500350
	
	I0721 23:50:16.203389    3296 fix.go:216] guest clock: 1721605816.220500350
	I0721 23:50:16.203389    3296 fix.go:229] Guest: 2024-07-21 23:50:16.22050035 +0000 UTC Remote: 2024-07-21 23:50:11.2552893 +0000 UTC m=+58.166615301 (delta=4.96521105s)
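	The %!s(MISSING) and %!N(MISSING) tokens in the `date` command above (and in the earlier `printf %!s(MISSING)` lines) are Go's fmt package rendering format verbs that were given no argument: the intended guest commands were `date +%s.%N` and `printf %s ...`, and the literal shell verbs were consumed when the command string passed through a Printf-style call. A two-line repro of the rendering:

	package main

	import "fmt"

	func main() {
		// A verb with no matching argument renders as %!<verb>(MISSING),
		// which is exactly what produced "date +%!s(MISSING).%!N(MISSING)":
		fmt.Printf("date +%s.%N\n")
	}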
	I0721 23:50:16.203490    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:18.378670    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:18.378670    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:18.378758    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:21.010405    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:21.010405    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:21.016091    3296 main.go:141] libmachine: Using SSH client type: native
	I0721 23:50:21.016289    3296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.97 22 <nil> <nil>}
	I0721 23:50:21.016289    3296 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721605816
	I0721 23:50:21.170845    3296 main.go:141] libmachine: SSH cmd err, output: <nil>: Sun Jul 21 23:50:16 UTC 2024
	
	I0721 23:50:21.171182    3296 fix.go:236] clock set: Sun Jul 21 23:50:16 UTC 2024
	 (err=<nil>)
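	The fix.go block above samples the guest clock, compares it with the host (delta=4.96521105s here), and then rewrites the guest clock over SSH. A sketch of that decision with a hypothetical helper and tolerance (minikube's real cutoff, and which reference clock it applies, are not shown in this excerpt; the sketch just reproduces the command observed in the log):

	package main

	import (
		"fmt"
		"time"
	)

	// resetCommand is illustrative only: when the measured skew exceeds
	// the tolerance, return the `date -s` command seen in the log, using
	// the sampled whole-second timestamp.
	func resetCommand(guest, host time.Time, tolerance time.Duration) (string, bool) {
		skew := guest.Sub(host)
		if skew < 0 {
			skew = -skew
		}
		if skew <= tolerance {
			return "", false
		}
		return fmt.Sprintf("sudo date -s @%d", guest.Unix()), true
	}

	func main() {
		guest := time.Unix(1721605816, 220500350)     // guest clock from the log
		host := guest.Add(-4965211050 * time.Nanosecond) // delta per the log
		cmd, needed := resetCommand(guest, host, time.Second)
		fmt.Println(needed, cmd) // true sudo date -s @1721605816
	}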
	I0721 23:50:21.171182    3296 start.go:83] releasing machines lock for "functional-264400", held for 1m2.3714351s
	I0721 23:50:21.171265    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:23.395806    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:23.395850    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:23.395850    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:26.024178    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:26.024178    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:26.028577    3296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0721 23:50:26.028739    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:26.043523    3296 ssh_runner.go:195] Run: cat /version.json
	I0721 23:50:26.043523    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stdout =====>] : Running
	
	I0721 23:50:28.403715    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:28.404030    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:28.404030    3296 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
	I0721 23:50:31.161323    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:31.161323    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:31.162685    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:31.218457    3296 main.go:141] libmachine: [stdout =====>] : 172.28.193.97
	
	I0721 23:50:31.219166    3296 main.go:141] libmachine: [stderr =====>] : 
	I0721 23:50:31.219224    3296 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
	I0721 23:50:31.264926    3296 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0721 23:50:31.265653    3296 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.236833s)
	W0721 23:50:31.265743    3296 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0721 23:50:31.313118    3296 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0721 23:50:31.313718    3296 ssh_runner.go:235] Completed: cat /version.json: (5.2701285s)
	I0721 23:50:31.326430    3296 ssh_runner.go:195] Run: systemctl --version
	I0721 23:50:31.335559    3296 command_runner.go:130] > systemd 252 (252)
	I0721 23:50:31.335630    3296 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0721 23:50:31.347271    3296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0721 23:50:31.356110    3296 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0721 23:50:31.356110    3296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0721 23:50:31.367122    3296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0721 23:50:31.377018    3296 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0721 23:50:31.377190    3296 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
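	The failed reachability probe behind these warnings appears to be a probe-construction issue rather than proven connectivity loss: the command run inside the Linux guest was `curl.exe` (the Windows host's binary name), which does not exist in the Buildroot guest, hence the exit 127 "command not found" above.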
	I0721 23:50:31.390830    3296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0721 23:50:31.390919    3296 start.go:495] detecting cgroup driver to use...
	I0721 23:50:31.391177    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:50:31.430605    3296 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0721 23:50:31.443163    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0721 23:50:31.473200    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0721 23:50:31.495064    3296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0721 23:50:31.505345    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0721 23:50:31.537330    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:50:31.570237    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0721 23:50:31.603641    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0721 23:50:31.634289    3296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0721 23:50:31.667749    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0721 23:50:31.699347    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0721 23:50:31.728970    3296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0721 23:50:31.758020    3296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0721 23:50:31.777872    3296 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0721 23:50:31.788667    3296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0721 23:50:31.817394    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:50:32.095962    3296 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0721 23:50:32.129994    3296 start.go:495] detecting cgroup driver to use...
	I0721 23:50:32.144084    3296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0721 23:50:32.171240    3296 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0721 23:50:32.171513    3296 command_runner.go:130] > [Unit]
	I0721 23:50:32.171513    3296 command_runner.go:130] > Description=Docker Application Container Engine
	I0721 23:50:32.171513    3296 command_runner.go:130] > Documentation=https://docs.docker.com
	I0721 23:50:32.171513    3296 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0721 23:50:32.171513    3296 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0721 23:50:32.171626    3296 command_runner.go:130] > StartLimitBurst=3
	I0721 23:50:32.171626    3296 command_runner.go:130] > StartLimitIntervalSec=60
	I0721 23:50:32.171626    3296 command_runner.go:130] > [Service]
	I0721 23:50:32.171626    3296 command_runner.go:130] > Type=notify
	I0721 23:50:32.171626    3296 command_runner.go:130] > Restart=on-failure
	I0721 23:50:32.171626    3296 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0721 23:50:32.171697    3296 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0721 23:50:32.171697    3296 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0721 23:50:32.171697    3296 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0721 23:50:32.171697    3296 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0721 23:50:32.171697    3296 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0721 23:50:32.171763    3296 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0721 23:50:32.171763    3296 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0721 23:50:32.171763    3296 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0721 23:50:32.171763    3296 command_runner.go:130] > ExecStart=
	I0721 23:50:32.171845    3296 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0721 23:50:32.171845    3296 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0721 23:50:32.171845    3296 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0721 23:50:32.171910    3296 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0721 23:50:32.171935    3296 command_runner.go:130] > LimitNOFILE=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > LimitNPROC=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > LimitCORE=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0721 23:50:32.171968    3296 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0721 23:50:32.171968    3296 command_runner.go:130] > TasksMax=infinity
	I0721 23:50:32.171968    3296 command_runner.go:130] > TimeoutStartSec=0
	I0721 23:50:32.171968    3296 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0721 23:50:32.171968    3296 command_runner.go:130] > Delegate=yes
	I0721 23:50:32.171968    3296 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0721 23:50:32.171968    3296 command_runner.go:130] > KillMode=process
	I0721 23:50:32.171968    3296 command_runner.go:130] > [Install]
	I0721 23:50:32.171968    3296 command_runner.go:130] > WantedBy=multi-user.target
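The unit dump demonstrates the override rule its own comments describe: for a non-oneshot service, a second ExecStart= is only legal after an empty ExecStart= clears the one inherited from the base configuration. A minimal sketch of the same pattern as a local drop-in (hypothetical path and daemon flags, not taken from this run):

	# Clear the inherited command, then supply the replacement; daemon-reload applies it.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
	  | sudo tee /etc/systemd/system/docker.service.d/override.conf
	sudo systemctl daemon-reload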
	I0721 23:50:32.185323    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:50:32.222308    3296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0721 23:50:32.269127    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0721 23:50:32.308679    3296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
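With docker selected as the runtime, the competing runtimes are stopped and re-probed; systemctl is-active exits non-zero once a unit is down, which is what the --quiet checks rely on. A sketch of the same stop-and-verify pattern:

	# Stop the unit, then use the exit code of is-active as the verification.
	sudo systemctl stop -f containerd
	sudo systemctl is-active --quiet containerd && echo still-running || echo stopped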
	I0721 23:50:32.337720    3296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0721 23:50:32.373754    3296 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
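The crictl.yaml written above pins crictl to the cri-dockerd socket rather than a containerd endpoint. A sketch of exercising it (assumes crictl is present in the guest, as /usr/bin/cri-dockerd is):

	# Both forms hit the same endpoint; the second reads it from /etc/crictl.yaml.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info
	sudo crictl ps -a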
	I0721 23:50:32.387559    3296 ssh_runner.go:195] Run: which cri-dockerd
	I0721 23:50:32.393567    3296 command_runner.go:130] > /usr/bin/cri-dockerd
	I0721 23:50:32.407091    3296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0721 23:50:32.429103    3296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
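The 189-byte 10-cni.conf payload is copied from memory and never echoed into the log. One plausible shape for such a cri-docker drop-in, shown purely as a hypothetical (flags and paths are assumptions, not recovered from this run); the daemon-reload a few steps later would pick it up:

	# Hypothetical contents; the real 189-byte file is not shown in the log.
	cat <<'EOF' | sudo tee /etc/systemd/system/cri-docker.service.d/10-cni.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni
	EOF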
	I0721 23:50:32.473119    3296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0721 23:50:32.747171    3296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0721 23:50:32.998956    3296 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0721 23:50:32.999296    3296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
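The 130-byte daemon.json is likewise written from memory without being echoed. Given the cgroupfs message on the previous line, a hypothetical fragment that would have that effect (the exact payload is an assumption; exec-opts is docker's standard knob for the cgroup driver):

	# Hypothetical daemon.json; verify after the daemon restarts.
	printf '{\n  "exec-opts": ["native.cgroupdriver=cgroupfs"]\n}\n' | sudo tee /etc/docker/daemon.json
	docker info --format '{{.CgroupDriver}}'   # should print: cgroupfs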
	I0721 23:50:33.051719    3296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0721 23:50:33.356719    3296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0721 23:51:44.652209    3296 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0721 23:51:44.652633    3296 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0721 23:51:44.654236    3296 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.2965818s)
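Here the run goes sideways: the restart blocks for over a minute and docker.service never comes back, and everything after this point is the journal dump gathered for diagnosis. The same triage can be reproduced by hand, as the error text suggests:

	# Standard triage for a failed unit; the foreground run surfaces the fatal error directly.
	systemctl status docker.service --no-pager
	journalctl -xeu docker.service --no-pager
	sudo dockerd --debug    # hypothetical manual run; stop with Ctrl-C when done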
	I0721 23:51:44.666167    3296 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.168118118Z" level=info msg="Starting up"
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.169181481Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.170711772Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.204506281Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239101537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239202743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239269947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239286548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239363452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239504161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239689572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239796878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239818179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239829580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240023691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240532022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243523700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243618405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244010128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244130936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244288745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.697516    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244514558Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274608247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274731654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274757156Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274774157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274806859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.698112    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.698341    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275350391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.698341    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275567104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275667010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275688011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.698446    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275707112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275721313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275742514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698521    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275764116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275780417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275794017Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698596    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275807418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698670    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275819619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275876422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698744    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275890923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698817    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275939726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698817    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275958027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275970928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275983929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275997230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276018931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698890    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276036232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276049233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276066634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276084135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.698975    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276105336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699059    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276119437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699059    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276132038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276357651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276454457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.699113    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276513660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699204    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276580764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.699204    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276655869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.699260    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276712372Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.699260    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276762075Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277188900Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277433015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277589224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.699289    3296 command_runner.go:130] > Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.278054352Z" level=info msg="containerd successfully booted in 0.074903s"
	I0721 23:51:44.699388    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.247751721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.277834397Z" level=info msg="Loading containers: start."
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.441509517Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.699409    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.655815314Z" level=info msg="Loading containers: done."
	I0721 23:51:44.699472    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676595884Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.699498    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676745891Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.699498    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788327964Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788443669Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:47:43 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.978875672Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980386251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980770345Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980878444Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980936643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:15 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.044964117Z" level=info msg="Starting up"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.046051302Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.047547081Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1095
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.077138071Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103738503Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103854902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103894101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.699662    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103909101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103931301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103942600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104085398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104215897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104236396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104246796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104289796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700183    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104467393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108266041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108366439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108599936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.700413    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108922331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109041730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109088329Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109284326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109335126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.700556    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109351726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109365825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109378125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109446524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110271513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.700667    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110431611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110840005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110866105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110891004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110910804Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700760    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110947503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700871    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110983003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700871    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111002703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111019702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111038702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111054502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.700987    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111096201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111137101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111175900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111189300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701088    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111205600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111236299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111251899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111274399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111294599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701188    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111330498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111345998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111376797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111394397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111421297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701287    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111457096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111535995Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111594594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111638794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111653394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.701386    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111706593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111722293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111736992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111747992Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.701489    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112862377Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112947276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113020375Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113041274Z" level=info msg="containerd successfully booted in 0.036788s"
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.102172085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.701587    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.122803299Z" level=info msg="Loading containers: start."
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.249728942Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.363421569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0721 23:51:44.701680    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.454819504Z" level=info msg="Loading containers: done."
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478314979Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478440677Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523349955Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.701758    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.701834    3296 command_runner.go:130] > Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523496853Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.403414153Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.404940232Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.701852    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405762121Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405911219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405963218Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.701949    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.702027    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:27 functional-264400 dockerd[1439]: time="2024-07-21T23:48:27.488211040Z" level=info msg="Starting up"
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.283164837Z" level=info msg="containerd not running, starting managed containerd"
	I0721 23:51:44.702140    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.284334421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1445
	I0721 23:51:44.702206    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.322546392Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.353969657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354127155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0721 23:51:44.702228    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354245353Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0721 23:51:44.702289    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354279453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354386052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354424751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702310    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354988043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355091642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355116141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355128941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355204740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702399    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355558335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359334983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359441882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0721 23:51:44.702494    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359612579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359749577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359878975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359993174Z" level=info msg="metadata content store policy set" policy=shared
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360138772Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0721 23:51:44.702588    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360266770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0721 23:51:44.702688    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360289170Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360306770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360434168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360490167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360944161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361072859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361207757Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361229957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361245657Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361275356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361389255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361429254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361568652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361594052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361609452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361622451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361656951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361680651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361901447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.702714    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361999446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362019946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362046645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362061445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362075845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703261    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362092245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362111045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362124244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362136944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362154644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362178044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703392    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362192043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362211643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362342741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362390341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0721 23:51:44.703511    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362406041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362418640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362429040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0721 23:51:44.703608    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362444140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0721 23:51:44.703684    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362455640Z" level=info msg="NRI interface is disabled by configuration."
	I0721 23:51:44.703715    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362742536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0721 23:51:44.703715    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362893434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362971133Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362995232Z" level=info msg="containerd successfully booted in 0.041146s"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:29 functional-264400 dockerd[1439]: time="2024-07-21T23:48:29.329544955Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.660319456Z" level=info msg="Loading containers: start."
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.796232675Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.907798631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.001144539Z" level=info msg="Loading containers: done."
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022589743Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022719941Z" level=info msg="Daemon has completed initialization"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067087927Z" level=info msg="API listen on /var/run/docker.sock"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067159926Z" level=info msg="API listen on [::]:2376"
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:33 functional-264400 systemd[1]: Started Docker Application Container Engine.
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203705562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203993309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204174339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204501992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275055860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275220587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275259793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275372211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333574371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333646683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333744099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.703749    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333850816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416645674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416770094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416839505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.417133553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625603538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704284    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625875582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704442    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625899586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704622    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.626009704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704779    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776176512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776370643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776546172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.835904420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836147160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836225472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836649541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887079538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887333179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887543914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887899671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.134772975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141087657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141198860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141750876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576099088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576165990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.704820    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576179490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576332795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.700943823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701110428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705352    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701133028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705465    3296 command_runner.go:130] > Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701305233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705465    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.251787691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705516    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252007895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705516    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252034496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252193199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.458949480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459063270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459134864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459296351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733493277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733949139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734221216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734462295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.409481815Z" level=info msg="Processing signal 'terminated'"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.656026383Z" level=info msg="ignoring event" container=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.657306959Z" level=info msg="shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658560636Z" level=warning msg="cleaning up after shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658678934Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.676709403Z" level=info msg="ignoring event" container=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.677164894Z" level=info msg="shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678209575Z" level=warning msg="cleaning up after shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	I0721 23:51:44.705569    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678304373Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.695081165Z" level=info msg="ignoring event" container=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695302161Z" level=info msg="shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695385859Z" level=warning msg="cleaning up after shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695446458Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.701015856Z" level=info msg="ignoring event" container=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.702594427Z" level=info msg="shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	I0721 23:51:44.706098    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704149698Z" level=warning msg="cleaning up after shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	I0721 23:51:44.706258    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704221897Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.728693847Z" level=info msg="shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729328035Z" level=info msg="ignoring event" container=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729433134Z" level=info msg="ignoring event" container=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.731072903Z" level=warning msg="cleaning up after shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.734341743Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.734844834Z" level=info msg="ignoring event" container=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.735006831Z" level=info msg="ignoring event" container=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.743975166Z" level=info msg="shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744093164Z" level=warning msg="cleaning up after shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744205762Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.730359917Z" level=info msg="shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751792823Z" level=warning msg="cleaning up after shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751834022Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759660178Z" level=info msg="ignoring event" container=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759862574Z" level=info msg="ignoring event" container=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760069570Z" level=info msg="ignoring event" container=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760281966Z" level=info msg="ignoring event" container=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.706282    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760277567Z" level=info msg="shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760380865Z" level=warning msg="cleaning up after shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760394364Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.748823577Z" level=info msg="shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765443172Z" level=warning msg="cleaning up after shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765461071Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769325900Z" level=info msg="shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769546096Z" level=warning msg="cleaning up after shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	I0721 23:51:44.706800    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769827691Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.706951    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774921997Z" level=info msg="shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	I0721 23:51:44.706951    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774984396Z" level=warning msg="cleaning up after shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	I0721 23:51:44.706995    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774997396Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788278152Z" level=info msg="shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788393450Z" level=warning msg="cleaning up after shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788444649Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.846647379Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1439]: time="2024-07-21T23:50:38.541510181Z" level=info msg="ignoring event" container=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.544122633Z" level=info msg="shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545450508Z" level=warning msg="cleaning up after shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545830901Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.461769452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.504142282Z" level=info msg="ignoring event" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504338210Z" level=info msg="shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504430323Z" level=warning msg="cleaning up after shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504443725Z" level=info msg="cleaning up dead shim" namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.578959353Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579851478Z" level=info msg="Daemon shutdown complete"
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579966294Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.580111114Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Consumed 5.235s CPU time.
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	I0721 23:51:44.707026    3296 command_runner.go:130] > Jul 21 23:50:44 functional-264400 dockerd[4061]: time="2024-07-21T23:50:44.647231378Z" level=info msg="Starting up"
	I0721 23:51:44.707551    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 dockerd[4061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0721 23:51:44.707551    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0721 23:51:44.707593    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0721 23:51:44.707593    3296 command_runner.go:130] > Jul 21 23:51:44 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
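	The journal replay above ends at the decisive failure: after the restart at 23:50:44, dockerd[4061] logs "Starting up" and then, sixty seconds later, gives up dialing /run/containerd/containerd.sock, so systemd marks docker.service as failed. A minimal diagnostic sketch, assuming shell access to the node through minikube ssh (profile name taken from this log; the checks are illustrative and were not part of the automated run):

	    # open a shell inside the functional-264400 node (profile name from this log)
	    out/minikube-windows-amd64.exe ssh -p functional-264400
	    # confirm the unit state and the tail of the docker journal
	    sudo systemctl status docker --no-pager
	    sudo journalctl -u docker --no-pager | tail -n 50
	    # check whether either containerd socket mentioned in the log actually exists
	    ls -l /run/containerd/containerd.sock /var/run/docker/containerd/containerd.sock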
	I0721 23:51:44.736086    3296 out.go:177] 
	W0721 23:51:44.740389    3296 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 21 23:47:42 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.168118118Z" level=info msg="Starting up"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.169181481Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:47:42 functional-264400 dockerd[672]: time="2024-07-21T23:47:42.170711772Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.204506281Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239101537Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239202743Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239269947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239286548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239363452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239504161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239689572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239796878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239818179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.239829580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240023691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.240532022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243523700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.243618405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244010128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244130936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244288745Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.244514558Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274608247Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274731654Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274757156Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274774157Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.274806859Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275036072Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275350391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275567104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275667010Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275688011Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275707112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275721313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275742514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275764116Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275780417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275794017Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275807418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275819619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275861822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275876422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275890923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275939726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275958027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275970928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275983929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.275997230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276018931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276036232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276049233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276066634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276084135Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276105336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276119437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276132038Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276357651Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276454457Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276513660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276580764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276655869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276712372Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.276762075Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277188900Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277433015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.277589224Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:47:42 functional-264400 dockerd[679]: time="2024-07-21T23:47:42.278054352Z" level=info msg="containerd successfully booted in 0.074903s"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.247751721Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.277834397Z" level=info msg="Loading containers: start."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.441509517Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.655815314Z" level=info msg="Loading containers: done."
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676595884Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.676745891Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788327964Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:47:43 functional-264400 dockerd[672]: time="2024-07-21T23:47:43.788443669Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:47:43 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.978875672Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:14 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980386251Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980770345Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980878444Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:14 functional-264400 dockerd[672]: time="2024-07-21T23:48:14.980936643Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:15 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:15 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:15 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.044964117Z" level=info msg="Starting up"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.046051302Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:16 functional-264400 dockerd[1088]: time="2024-07-21T23:48:16.047547081Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1095
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.077138071Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103738503Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103854902Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103894101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103909101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103931301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.103942600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104085398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104215897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104236396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104246796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104289796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.104467393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108266041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108366439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108599936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.108922331Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109041730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109088329Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109284326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109335126Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109351726Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109365825Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109378125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.109446524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110271513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110431611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110840005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110866105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110891004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110910804Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110947503Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.110983003Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111002703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111019702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111038702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111054502Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111096201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111137101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111158800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111175900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111189300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111205600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111236299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111251899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111274399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111294599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111330498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111345998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111376797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111394397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111421297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111457096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111535995Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111594594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111638794Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111653394Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111706593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111722293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111736992Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.111747992Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112862377Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.112947276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113020375Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:16 functional-264400 dockerd[1095]: time="2024-07-21T23:48:16.113041274Z" level=info msg="containerd successfully booted in 0.036788s"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.102172085Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.122803299Z" level=info msg="Loading containers: start."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.249728942Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.363421569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.454819504Z" level=info msg="Loading containers: done."
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478314979Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.478440677Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523349955Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:17 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:17 functional-264400 dockerd[1088]: time="2024-07-21T23:48:17.523496853Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:26 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.403414153Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.404940232Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405762121Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405911219Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:48:26 functional-264400 dockerd[1088]: time="2024-07-21T23:48:26.405963218Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:48:27 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:48:27 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:48:27 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:48:27 functional-264400 dockerd[1439]: time="2024-07-21T23:48:27.488211040Z" level=info msg="Starting up"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.283164837Z" level=info msg="containerd not running, starting managed containerd"
	Jul 21 23:48:28 functional-264400 dockerd[1439]: time="2024-07-21T23:48:28.284334421Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1445
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.322546392Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.353969657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354127155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354245353Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354279453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354386052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354424751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.354988043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355091642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355116141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355128941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355204740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.355558335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359334983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359441882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359612579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359749577Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359878975Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.359993174Z" level=info msg="metadata content store policy set" policy=shared
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360138772Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360266770Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360289170Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360306770Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360434168Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360490167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.360944161Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361072859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361207757Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361229957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361245657Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361275356Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361389255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361429254Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361568652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361594052Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361609452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361622451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361656951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361680651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361901447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.361999446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362019946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362033446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362046645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362061445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362075845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362092245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362111045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362124244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362136944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362154644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362178044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362192043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362211643Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362342741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362390341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362406041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362418640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362429040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362444140Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362455640Z" level=info msg="NRI interface is disabled by configuration."
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362742536Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362893434Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362971133Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 21 23:48:28 functional-264400 dockerd[1445]: time="2024-07-21T23:48:28.362995232Z" level=info msg="containerd successfully booted in 0.041146s"
	Jul 21 23:48:29 functional-264400 dockerd[1439]: time="2024-07-21T23:48:29.329544955Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.660319456Z" level=info msg="Loading containers: start."
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.796232675Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 21 23:48:32 functional-264400 dockerd[1439]: time="2024-07-21T23:48:32.907798631Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.001144539Z" level=info msg="Loading containers: done."
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022589743Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.022719941Z" level=info msg="Daemon has completed initialization"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067087927Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 21 23:48:33 functional-264400 dockerd[1439]: time="2024-07-21T23:48:33.067159926Z" level=info msg="API listen on [::]:2376"
	Jul 21 23:48:33 functional-264400 systemd[1]: Started Docker Application Container Engine.
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203705562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.203993309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204174339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.204501992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275055860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275220587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275259793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.275372211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333574371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333646683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333744099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.333850816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416645674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416770094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.416839505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.417133553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625603538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625875582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.625899586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.626009704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776176512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776348840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776370643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.776546172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.835904420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836147160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836225472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.836649541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887079538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887333179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887543914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:48:41 functional-264400 dockerd[1445]: time="2024-07-21T23:48:41.887899671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.134772975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141087657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141198860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.141750876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576099088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576165990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576179490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:03 functional-264400 dockerd[1445]: time="2024-07-21T23:49:03.576332795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.700943823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701110428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701133028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:04 functional-264400 dockerd[1445]: time="2024-07-21T23:49:04.701305233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.251787691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252007895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252034496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:05 functional-264400 dockerd[1445]: time="2024-07-21T23:49:05.252193199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.458949480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459063270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459134864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.459296351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733493277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.733949139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734221216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:49:11 functional-264400 dockerd[1445]: time="2024-07-21T23:49:11.734462295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 21 23:50:33 functional-264400 systemd[1]: Stopping Docker Application Container Engine...
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.409481815Z" level=info msg="Processing signal 'terminated'"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.656026383Z" level=info msg="ignoring event" container=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.657306959Z" level=info msg="shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658560636Z" level=warning msg="cleaning up after shim disconnected" id=62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.658678934Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.676709403Z" level=info msg="ignoring event" container=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.677164894Z" level=info msg="shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678209575Z" level=warning msg="cleaning up after shim disconnected" id=17bf87600a16dc8eeeb6b9cddb94abca6db3ee2553fc559f238ec13235004952 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.678304373Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.695081165Z" level=info msg="ignoring event" container=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695302161Z" level=info msg="shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695385859Z" level=warning msg="cleaning up after shim disconnected" id=46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.695446458Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.701015856Z" level=info msg="ignoring event" container=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.702594427Z" level=info msg="shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704149698Z" level=warning msg="cleaning up after shim disconnected" id=0b1a8368da44eb65791f98c2cd8d2aab392acf585703af8eb584b51a7ec47330 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.704221897Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.728693847Z" level=info msg="shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729328035Z" level=info msg="ignoring event" container=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.729433134Z" level=info msg="ignoring event" container=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.731072903Z" level=warning msg="cleaning up after shim disconnected" id=ced28c6413687595e7271f3d825fe55c25b4cc72fd3a91c1b4ff10c8bf4e9a16 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.734341743Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.734844834Z" level=info msg="ignoring event" container=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.735006831Z" level=info msg="ignoring event" container=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.743975166Z" level=info msg="shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744093164Z" level=warning msg="cleaning up after shim disconnected" id=c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.744205762Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.730359917Z" level=info msg="shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751792823Z" level=warning msg="cleaning up after shim disconnected" id=fe1d9f7b0dda523a1aed262022c8f1ae0f4b31a2183abbbb2050bfd9eac9281b namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.751834022Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759660178Z" level=info msg="ignoring event" container=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.759862574Z" level=info msg="ignoring event" container=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760069570Z" level=info msg="ignoring event" container=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1439]: time="2024-07-21T23:50:33.760281966Z" level=info msg="ignoring event" container=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760277567Z" level=info msg="shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760380865Z" level=warning msg="cleaning up after shim disconnected" id=d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.760394364Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.748823577Z" level=info msg="shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765443172Z" level=warning msg="cleaning up after shim disconnected" id=91e4841af6b5d90d1a4fc7579cc239d172979eb75e71dab4efab7cd37f3e2c50 namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.765461071Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769325900Z" level=info msg="shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769546096Z" level=warning msg="cleaning up after shim disconnected" id=fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.769827691Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774921997Z" level=info msg="shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774984396Z" level=warning msg="cleaning up after shim disconnected" id=c04f24c54fb7dcefe476415e1fa62039366d1ba21d8c43b782fc3a0e1181d0ac namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.774997396Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788278152Z" level=info msg="shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788393450Z" level=warning msg="cleaning up after shim disconnected" id=a40f34bfc284abb78d6344c78afc6285412c4b8797a118827c09b9d4c6b46bbe namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.788444649Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:33 functional-264400 dockerd[1445]: time="2024-07-21T23:50:33.846647379Z" level=warning msg="cleanup warnings time=\"2024-07-21T23:50:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1439]: time="2024-07-21T23:50:38.541510181Z" level=info msg="ignoring event" container=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.544122633Z" level=info msg="shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545450508Z" level=warning msg="cleaning up after shim disconnected" id=74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559 namespace=moby
	Jul 21 23:50:38 functional-264400 dockerd[1445]: time="2024-07-21T23:50:38.545830901Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.461769452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.504142282Z" level=info msg="ignoring event" container=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504338210Z" level=info msg="shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504430323Z" level=warning msg="cleaning up after shim disconnected" id=6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084 namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1445]: time="2024-07-21T23:50:43.504443725Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.578959353Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579851478Z" level=info msg="Daemon shutdown complete"
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.579966294Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 21 23:50:43 functional-264400 dockerd[1439]: time="2024-07-21T23:50:43.580111114Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Deactivated successfully.
	Jul 21 23:50:44 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 21 23:50:44 functional-264400 systemd[1]: docker.service: Consumed 5.235s CPU time.
	Jul 21 23:50:44 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 21 23:50:44 functional-264400 dockerd[4061]: time="2024-07-21T23:50:44.647231378Z" level=info msg="Starting up"
	Jul 21 23:51:44 functional-264400 dockerd[4061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 21 23:51:44 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 21 23:51:44 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0721 23:51:44.740966    3296 out.go:239] * 
	W0721 23:51:44.742865    3296 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 23:51:44.751682    3296 out.go:177] 
	
	
	==> Docker <==
	Jul 22 00:13:50 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 00:13:50 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 00:13:50 functional-264400 dockerd[9649]: time="2024-07-22T00:13:50.268377153Z" level=info msg="Starting up"
	Jul 22 00:14:50 functional-264400 dockerd[9649]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 22 00:14:50 functional-264400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 00:14:50 functional-264400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 00:14:50 functional-264400 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="error getting RW layer size for container ID 'd3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/d3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd3e78755e2723cea8dbb7790cf74ddfe2e3c1f682582e00b28352604f91abeaf'"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="error getting RW layer size for container ID 'fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fc44cc91553217a7d2d606d1366dc35f4507a18e83bf071558e633d9e08b35ab'"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="error getting RW layer size for container ID 'c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c67e5a258e46192f840373dc1cddd45407bced4b6ae56e3c92e534e4081ddee5'"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="error getting RW layer size for container ID '74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="Set backoffDuration to : 1m0s for container ID '74e9b6a037480d2a4132978a7caacadceb1dc3dd89606ef5b5b3908e63cd1559'"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="error getting RW layer size for container ID '6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6feb574568f1be4b7ccbce6a39da8c5df3df878f8e4a225fda9248b05bf45084'"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="error getting RW layer size for container ID '46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="Set backoffDuration to : 1m0s for container ID '46c2dd6045c0c0e361f75ed32bf175c15394f0c5cf5aab57a8f4a554c031d0cd'"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="error getting RW layer size for container ID '62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:14:50 functional-264400 cri-dockerd[1342]: time="2024-07-22T00:14:50Z" level=error msg="Set backoffDuration to : 1m0s for container ID '62030743385ed6ac914c5890a12695976aa9b8bb796ffef657193ce12ba3d789'"
	Jul 22 00:14:50 functional-264400 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Jul 22 00:14:50 functional-264400 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 00:14:50 functional-264400 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-22T00:14:52Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul21 23:48] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.098748] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.521494] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.200309] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.246957] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +2.856365] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.199871] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.217213] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.265319] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[  +7.860794] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[  +0.119892] kauditd_printk_skb: 202 callbacks suppressed
	[  +6.328518] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.744596] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.374282] systemd-fstab-generator[1877]: Ignoring "noauto" option for root device
	[  +0.101703] kauditd_printk_skb: 48 callbacks suppressed
	[  +8.037046] systemd-fstab-generator[2275]: Ignoring "noauto" option for root device
	[  +0.135108] kauditd_printk_skb: 62 callbacks suppressed
	[Jul21 23:49] systemd-fstab-generator[2503]: Ignoring "noauto" option for root device
	[  +0.181805] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.421058] kauditd_printk_skb: 71 callbacks suppressed
	[Jul21 23:50] systemd-fstab-generator[3581]: Ignoring "noauto" option for root device
	[  +0.640674] systemd-fstab-generator[3616]: Ignoring "noauto" option for root device
	[  +0.278009] systemd-fstab-generator[3628]: Ignoring "noauto" option for root device
	[  +0.318270] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[  +5.355152] kauditd_printk_skb: 91 callbacks suppressed
	
	
	==> kernel <==
	 00:15:50 up 29 min,  0 users,  load average: 0.05, 0.02, 0.00
	Linux functional-264400 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 22 00:15:43 functional-264400 kubelet[2282]: E0722 00:15:43.488672    2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-264400.17e45f62791f0602\": dial tcp 172.28.193.97:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-264400.17e45f62791f0602  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-264400,UID:d4a646c87acc77b79c334272b81f6958,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.28.193.97:8441/readyz\": dial tcp 172.28.193.97:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-264400,},FirstTimestamp:2024-07-21 23:50:34.105882114 +0000 UTC m=+105.975509982,LastTimestamp:2024-07-21 23:50:42.1069953 +0000 UTC m=+113.976623268,Count:10,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-264400,}"
	Jul 22 00:15:45 functional-264400 kubelet[2282]: E0722 00:15:45.077028    2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-264400?timeout=10s\": dial tcp 172.28.193.97:8441: connect: connection refused" interval="7s"
	Jul 22 00:15:46 functional-264400 kubelet[2282]: E0722 00:15:46.241200    2282 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 25m13.569498258s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 22 00:15:48 functional-264400 kubelet[2282]: I0722 00:15:48.333552    2282 status_manager.go:853] "Failed to get status for pod" podUID="d4a646c87acc77b79c334272b81f6958" pod="kube-system/kube-apiserver-functional-264400" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-264400\": dial tcp 172.28.193.97:8441: connect: connection refused"
	Jul 22 00:15:48 functional-264400 kubelet[2282]: E0722 00:15:48.366563    2282 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:15:48 functional-264400 kubelet[2282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:15:48 functional-264400 kubelet[2282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:15:48 functional-264400 kubelet[2282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:15:48 functional-264400 kubelet[2282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.623289    2282 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.623356    2282 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.623431    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.623468    2282 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.623641    2282 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.623671    2282 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: I0722 00:15:50.623684    2282 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.623717    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.624040    2282 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.624104    2282 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.624134    2282 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.624150    2282 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.624215    2282 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.627292    2282 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.627622    2282 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 22 00:15:50 functional-264400 kubelet[2282]: E0722 00:15:50.628610    2282 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0722 00:14:15.985327    8564 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0722 00:14:50.286131    8564 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:14:50.318887    8564 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:14:50.349757    8564 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:14:50.378326    8564 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:14:50.406332    8564 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:14:50.434810    8564 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:14:50.466153    8564 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 00:14:50.498670    8564 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-264400 -n functional-264400
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-264400 -n functional-264400: exit status 2 (12.1099088s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0722 00:15:51.478964   12760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-264400" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (120.01s)
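
Note: the post-mortem above shows dockerd stuck in a restart loop because it cannot reach containerd ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded", restart counter at 24), which in turn explains the dead apiserver and the crictl/kubectl failures. A minimal triage sketch, assuming the guest is still reachable over SSH and runs the stock docker/containerd systemd units from the minikube ISO (the modprobe line is a hedged guess at the separate ip6tables canary error, not a confirmed fix):

	# Shell into the guest for this profile.
	out/minikube-windows-amd64.exe ssh -p functional-264400
	# Inside the guest: is containerd alive, and does its socket exist?
	sudo systemctl status containerd --no-pager
	ls -l /run/containerd/containerd.sock
	# Journals for both daemons around the failed restarts.
	sudo journalctl -u containerd -u docker --no-pager | tail -n 100
	# If containerd is wedged, restart it before docker.
	sudo systemctl restart containerd docker
	# Separate issue: the kubelet canary reports the ip6tables "nat" table
	# missing; loading the module is an assumption, not a verified remedy.
	sudo modprobe ip6table_nat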

TestFunctional/parallel/ConfigCmd (1.55s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-264400 config unset cpus" to be -""- but got *"W0722 00:19:32.468468    3348 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 config get cpus: exit status 14 (244.0228ms)

** stderr ** 
	W0722 00:19:32.759508    2004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-264400 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0722 00:19:32.759508    2004 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-264400 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0722 00:19:33.003603   11980 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-264400 config get cpus" to be -""- but got *"W0722 00:19:33.288614    2568 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-264400 config unset cpus" to be -""- but got *"W0722 00:19:33.532941    6892 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 config get cpus: exit status 14 (223.1112ms)

** stderr ** 
	W0722 00:19:33.784478    4784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-264400 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0722 00:19:33.784478    4784 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.55s)
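
Note: every subtest here fails the same way: each minikube invocation prepends the "Unable to resolve the current Docker CLI context \"default\"" warning to stderr, so the exact-match assertions at functional_test.go:1206 can never pass even though the config commands themselves behave correctly. A host-side sketch for confirming the Docker CLI context store is the culprit (it assumes only the standard docker context subcommands, and inspects state rather than claiming to repair it):

	# On the Windows host: which contexts exist, and which is current?
	docker context ls
	# Inspect the "default" entry the warning complains about.
	docker context inspect default
	# Re-selecting the built-in default context is a common workaround for
	# stale context metadata; treat it as a workaround, not a root cause.
	docker context use default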

TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 service --namespace=default --https --url hello-node: exit status 1 (15.0413308s)

** stderr ** 
	W0722 00:20:16.964411    4056 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-264400 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

TestFunctional/parallel/ServiceCmd/Format (15.07s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 service hello-node --url --format={{.IP}}: exit status 1 (15.0695139s)

** stderr ** 
	W0722 00:20:32.023159    4844 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-264400 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.07s)

TestFunctional/parallel/ServiceCmd/URL (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 service hello-node --url: exit status 1 (15.0184658s)

** stderr ** 
	W0722 00:20:47.114782    5756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-264400 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.02s)
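
Note: all three ServiceCmd subtests (HTTPS, Format, URL) time out after ~15s without producing a URL, which is consistent with the apiserver being unreachable (see the MinikubeKubectlCmdDirectly post-mortem above) rather than with the hello-node service itself. A manual cross-check sketch, assuming the cluster answers kubectl at all; the profile and service names mirror the test:

	# Does the service exist and expose a NodePort?
	out/minikube-windows-amd64.exe kubectl -p functional-264400 -- get svc hello-node -o wide
	# Rebuild the URL by hand from node IP plus NodePort.
	out/minikube-windows-amd64.exe -p functional-264400 ip
	out/minikube-windows-amd64.exe kubectl -p functional-264400 -- get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'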

TestMultiControlPlane/serial/PingHostFromPods (70.45s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-7fbtz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-7fbtz -- sh -c "ping -c 1 172.28.192.1"
E0722 00:39:32.378982    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-7fbtz -- sh -c "ping -c 1 172.28.192.1": exit status 1 (10.5190775s)

-- stdout --
	PING 172.28.192.1 (172.28.192.1): 56 data bytes
	
	--- 172.28.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0722 00:39:23.927653    2296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.28.192.1) from pod (busybox-fc5497c4f-7fbtz): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-sv6jt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-sv6jt -- sh -c "ping -c 1 172.28.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-sv6jt -- sh -c "ping -c 1 172.28.192.1": exit status 1 (10.5466001s)

-- stdout --
	PING 172.28.192.1 (172.28.192.1): 56 data bytes
	
	--- 172.28.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0722 00:39:34.983179    4888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.28.192.1) from pod (busybox-fc5497c4f-sv6jt): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-tdwp8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-tdwp8 -- sh -c "ping -c 1 172.28.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-tdwp8 -- sh -c "ping -c 1 172.28.192.1": exit status 1 (10.5157136s)

-- stdout --
	PING 172.28.192.1 (172.28.192.1): 56 data bytes
	
	--- 172.28.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0722 00:39:46.046396    7716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.28.192.1) from pod (busybox-fc5497c4f-tdwp8): exit status 1
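
Note: in each case above, DNS from the pod succeeded (the nslookup of host.minikube.internal returned an address) while every ICMP echo to the Hyper-V gateway 172.28.192.1 was lost. On Hyper-V hosts this pattern typically points at the Windows firewall dropping inbound ICMPv4 on the virtual switch rather than at pod networking; enabling the host's inbound ICMPv4 echo rule (e.g. "File and Printer Sharing (Echo Request - ICMPv4-In)") is the usual remedy, though that is a host-side change and an assumption here. A re-probe sketch using the same exec path as the test:

	# A larger ping count distinguishes total filtering from flakiness.
	out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-7fbtz -- ping -c 5 172.28.192.1
	# Confirm the host alias still resolves from the same pod.
	out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-7fbtz -- nslookup host.minikube.internal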
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-474700 -n ha-474700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-474700 -n ha-474700: (12.8172407s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 logs -n 25: (9.1881246s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-264400 image build -t     | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:22 UTC | 22 Jul 24 00:23 UTC |
	|         | localhost/my-image:functional-264400 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-264400                    | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:22 UTC | 22 Jul 24 00:23 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-264400 image ls           | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:23 UTC | 22 Jul 24 00:23 UTC |
	| delete  | -p functional-264400                 | functional-264400 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:25 UTC | 22 Jul 24 00:26 UTC |
	| start   | -p ha-474700 --wait=true             | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:26 UTC | 22 Jul 24 00:38 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- apply -f             | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- rollout status       | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- get pods -o          | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- get pods -o          | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-7fbtz --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-sv6jt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-tdwp8 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-7fbtz --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-sv6jt --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-tdwp8 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-7fbtz -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-sv6jt -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-tdwp8 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- get pods -o          | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-7fbtz              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC |                     |
	|         | busybox-fc5497c4f-7fbtz -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.192.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-sv6jt              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC |                     |
	|         | busybox-fc5497c4f-sv6jt -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.192.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC | 22 Jul 24 00:39 UTC |
	|         | busybox-fc5497c4f-tdwp8              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-474700 -- exec                 | ha-474700         | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:39 UTC |                     |
	|         | busybox-fc5497c4f-tdwp8 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.192.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:26:39
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:26:39.221971   13232 out.go:291] Setting OutFile to fd 464 ...
	I0722 00:26:39.223984   13232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:26:39.223984   13232 out.go:304] Setting ErrFile to fd 612...
	I0722 00:26:39.223984   13232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:26:39.245984   13232 out.go:298] Setting JSON to false
	I0722 00:26:39.247973   13232 start.go:129] hostinfo: {"hostname":"minikube6","uptime":123206,"bootTime":1721484792,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0722 00:26:39.248984   13232 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 00:26:39.256984   13232 out.go:177] * [ha-474700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0722 00:26:39.260974   13232 notify.go:220] Checking for updates...
	I0722 00:26:39.260974   13232 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:26:39.263973   13232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:26:39.265972   13232 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0722 00:26:39.268973   13232 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:26:39.271983   13232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:26:39.274973   13232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:26:44.712926   13232 out.go:177] * Using the hyperv driver based on user configuration
	I0722 00:26:44.718207   13232 start.go:297] selected driver: hyperv
	I0722 00:26:44.718207   13232 start.go:901] validating driver "hyperv" against <nil>
	I0722 00:26:44.718207   13232 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:26:44.767662   13232 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 00:26:44.768392   13232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:26:44.768392   13232 cni.go:84] Creating CNI manager for ""
	I0722 00:26:44.768392   13232 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0722 00:26:44.768392   13232 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 00:26:44.769053   13232 start.go:340] cluster config:
	{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:26:44.769053   13232 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:26:44.777816   13232 out.go:177] * Starting "ha-474700" primary control-plane node in "ha-474700" cluster
	I0722 00:26:44.783914   13232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 00:26:44.783914   13232 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0722 00:26:44.783914   13232 cache.go:56] Caching tarball of preloaded images
	I0722 00:26:44.783914   13232 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 00:26:44.783914   13232 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 00:26:44.784940   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:26:44.784940   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json: {Name:mk591e8a86ee287de6657a04867487c561e834a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:26:44.786199   13232 start.go:360] acquireMachinesLock for ha-474700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:26:44.787102   13232 start.go:364] duration metric: took 902.3µs to acquireMachinesLock for "ha-474700"
	I0722 00:26:44.787244   13232 start.go:93] Provisioning new machine with config: &{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:26:44.787244   13232 start.go:125] createHost starting for "" (driver="hyperv")
	I0722 00:26:44.795860   13232 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 00:26:44.795860   13232 start.go:159] libmachine.API.Create for "ha-474700" (driver="hyperv")
	I0722 00:26:44.795860   13232 client.go:168] LocalClient.Create starting
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Parsing certificate...
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Parsing certificate...
	I0722 00:26:44.796847   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0722 00:26:46.920893   13232 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0722 00:26:46.920946   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:26:46.920946   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0722 00:26:48.669298   13232 main.go:141] libmachine: [stdout =====>] : False
	
	I0722 00:26:48.670076   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:26:48.670076   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:26:50.173577   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:26:50.173666   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:26:50.173757   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:26:53.798258   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:26:53.798258   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:26:53.800988   13232 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 00:26:54.254936   13232 main.go:141] libmachine: Creating SSH key...
	I0722 00:26:54.443700   13232 main.go:141] libmachine: Creating VM...
	I0722 00:26:54.443700   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:26:57.305451   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:26:57.305713   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:26:57.305713   13232 main.go:141] libmachine: Using switch "Default Switch"
	I0722 00:26:57.305851   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:26:59.088860   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:26:59.089142   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:26:59.089176   13232 main.go:141] libmachine: Creating VHD
	I0722 00:26:59.089176   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0722 00:27:02.923386   13232 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D35E6BF3-D7D2-4C13-8EE5-3CEC4F188D51
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0722 00:27:02.923823   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:02.923823   13232 main.go:141] libmachine: Writing magic tar header
	I0722 00:27:02.924019   13232 main.go:141] libmachine: Writing SSH key tar header
	I0722 00:27:02.935276   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0722 00:27:06.147738   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:06.147738   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:06.148743   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\disk.vhd' -SizeBytes 20000MB
	I0722 00:27:08.709897   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:08.709897   13232 main.go:141] libmachine: [stderr =====>] : 
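	The disk bootstrap above is a four-step trick: create a tiny fixed-size VHD, write a tar stream carrying the generated SSH key straight into its raw bytes (the "Writing magic tar header" / "Writing SSH key tar header" lines above), then convert to a dynamic VHD and grow it to the requested size. Condensed into a sketch, with the in-Go tar step left as a comment; paths and sizes are exactly as logged, and an elevated PowerShell prompt is assumed:
	
	# 1. Fixed-size VHD so the raw bytes land at a predictable offset:
	Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\fixed.vhd' -SizeBytes 10MB -Fixed
	# 2. (Done inside minikube itself, not a cmdlet) write the magic tar header and
	#    the SSH key into fixed.vhd; boot2docker unpacks that tar on first boot.
	# 3. Convert to a growable disk, dropping the fixed source, then resize:
	Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\disk.vhd' -VHDType Dynamic -DeleteSource
	Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\disk.vhd' -SizeBytes 20000MB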
	I0722 00:27:08.710839   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-474700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0722 00:27:12.428220   13232 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-474700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0722 00:27:12.428390   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:12.428390   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-474700 -DynamicMemoryEnabled $false
	I0722 00:27:14.732980   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:14.732980   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:14.732980   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-474700 -Count 2
	I0722 00:27:16.938752   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:16.938980   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:16.939107   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-474700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\boot2docker.iso'
	I0722 00:27:19.550560   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:19.551520   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:19.551789   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-474700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\disk.vhd'
	I0722 00:27:22.226866   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:22.226866   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:22.227433   13232 main.go:141] libmachine: Starting VM...
	I0722 00:27:22.227490   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-474700
	I0722 00:27:25.488148   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:25.488148   13232 main.go:141] libmachine: [stderr =====>] : 
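	Taken together, the cmdlet executions above assemble and boot the machine. As one sketch, with every parameter verbatim from this run and an elevated PowerShell prompt assumed:
	
	# Create the VM on the Default Switch with 2200MB of startup memory:
	Hyper-V\New-VM ha-474700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	# Pin memory and CPUs, attach the boot ISO and the prepared disk, then power on:
	Hyper-V\Set-VMMemory -VMName ha-474700 -DynamicMemoryEnabled $false
	Hyper-V\Set-VMProcessor ha-474700 -Count 2
	Hyper-V\Set-VMDvdDrive -VMName ha-474700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\boot2docker.iso'
	Hyper-V\Add-VMHardDiskDrive -VMName ha-474700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\disk.vhd'
	Hyper-V\Start-VM ha-474700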
	I0722 00:27:25.488940   13232 main.go:141] libmachine: Waiting for host to start...
	I0722 00:27:25.488940   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:27.825709   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:27.825709   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:27.825709   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:27:30.481873   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:30.482667   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:31.497892   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:33.735810   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:33.736842   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:33.736959   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:27:36.315419   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:36.315818   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:37.316708   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:39.521874   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:39.522409   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:39.522451   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:27:42.052425   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:42.052470   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:43.054279   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:45.305228   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:45.305228   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:45.306085   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:27:47.843247   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:47.843698   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:48.846426   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:51.147932   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:51.147932   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:51.148637   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:27:53.711926   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:27:53.711926   13232 main.go:141] libmachine: [stderr =====>] : 
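	The alternating state/ipaddresses queries above are a readiness poll: the driver waits until the guest's first adapter reports an IPv4 address (several empty rounds here before 172.28.196.103 appears). A minimal PowerShell sketch of the same loop; the two queries are verbatim, while the 1-second sleep is an assumption standing in for the driver's actual pacing:
	
	# Poll until the guest's first NIC reports an address:
	while (-not (( Hyper-V\Get-VM ha-474700 ).NetworkAdapters[0]).IPAddresses[0]) {
	    ( Hyper-V\Get-VM ha-474700 ).State    # stays 'Running' while DHCP settles
	    Start-Sleep -Seconds 1
	}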
	I0722 00:27:53.712832   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:55.883791   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:55.883791   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:55.883791   13232 machine.go:94] provisionDockerMachine start ...
	I0722 00:27:55.884063   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:58.079188   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:58.079188   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:58.079188   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:00.674656   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:00.674708   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:00.680461   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:00.691523   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:00.691523   13232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:28:00.826889   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:28:00.826958   13232 buildroot.go:166] provisioning hostname "ha-474700"
	I0722 00:28:00.827068   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:02.985917   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:02.985917   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:02.986696   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:05.554276   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:05.555309   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:05.560666   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:05.560666   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:05.560666   13232 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-474700 && echo "ha-474700" | sudo tee /etc/hostname
	I0722 00:28:05.738716   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-474700
	
	I0722 00:28:05.739039   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:07.909364   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:07.909364   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:07.910200   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:10.503709   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:10.503709   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:10.509909   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:10.510666   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:10.510666   13232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-474700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-474700/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-474700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:28:10.663807   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:28:10.663893   13232 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0722 00:28:10.663967   13232 buildroot.go:174] setting up certificates
	I0722 00:28:10.663967   13232 provision.go:84] configureAuth start
	I0722 00:28:10.663967   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:12.848991   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:12.849404   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:12.849483   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:15.393747   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:15.394014   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:15.394096   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:17.545945   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:17.546499   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:17.546617   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:20.109287   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:20.110099   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:20.110099   13232 provision.go:143] copyHostCerts
	I0722 00:28:20.110311   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0722 00:28:20.110690   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0722 00:28:20.110690   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0722 00:28:20.110946   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0722 00:28:20.112773   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0722 00:28:20.112928   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0722 00:28:20.112928   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0722 00:28:20.112928   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0722 00:28:20.114199   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0722 00:28:20.114199   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0722 00:28:20.114199   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0722 00:28:20.114939   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0722 00:28:20.116125   13232 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-474700 san=[127.0.0.1 172.28.196.103 ha-474700 localhost minikube]
	I0722 00:28:20.476100   13232 provision.go:177] copyRemoteCerts
	I0722 00:28:20.487026   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:28:20.487026   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:22.712406   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:22.712406   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:22.713351   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:25.285459   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:25.286635   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:25.287250   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:28:25.405119   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9179962s)
	I0722 00:28:25.405119   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0722 00:28:25.405402   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 00:28:25.452581   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0722 00:28:25.453174   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0722 00:28:25.495841   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0722 00:28:25.495979   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:28:25.543350   13232 provision.go:87] duration metric: took 14.8791943s to configureAuth
	I0722 00:28:25.543350   13232 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:28:25.544592   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:28:25.544889   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:27.737730   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:27.737730   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:27.737938   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:30.314881   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:30.314881   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:30.320666   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:30.321233   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:30.321389   13232 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 00:28:30.451894   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 00:28:30.451894   13232 buildroot.go:70] root file system type: tmpfs
	I0722 00:28:30.452092   13232 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 00:28:30.452233   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:32.620772   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:32.621849   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:32.621878   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:35.214956   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:35.215306   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:35.221015   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:35.221227   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:35.221227   13232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 00:28:35.387450   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 00:28:35.387450   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:37.552753   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:37.552983   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:37.553147   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:40.121107   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:40.121969   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:40.127709   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:40.128456   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:40.128515   13232 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 00:28:42.395668   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
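	The activation step above is an idempotent install: the rendered unit is written to docker.service.new, and only when it differs from the file on disk (here the diff fails outright, since no unit exists yet) is it moved into place and the daemon reloaded, enabled, and restarted. The same one-liner, reformatted for readability:
	
	# Install the new unit only when it changed, then reload/enable/restart docker:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	       sudo systemctl -f daemon-reload \
	       && sudo systemctl -f enable docker \
	       && sudo systemctl -f restart docker; }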
	
	I0722 00:28:42.395668   13232 machine.go:97] duration metric: took 46.5112859s to provisionDockerMachine
	I0722 00:28:42.395958   13232 client.go:171] duration metric: took 1m57.5983198s to LocalClient.Create
	I0722 00:28:42.395958   13232 start.go:167] duration metric: took 1m57.5986096s to libmachine.API.Create "ha-474700"
	I0722 00:28:42.395958   13232 start.go:293] postStartSetup for "ha-474700" (driver="hyperv")
	I0722 00:28:42.395958   13232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:28:42.408866   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:28:42.408866   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:44.540655   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:44.540655   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:44.540890   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:47.136808   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:47.136808   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:47.137290   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:28:47.246528   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8376007s)
	I0722 00:28:47.257655   13232 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:28:47.267997   13232 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:28:47.267997   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0722 00:28:47.268918   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0722 00:28:47.269687   13232 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0722 00:28:47.269774   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0722 00:28:47.280064   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:28:47.299834   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0722 00:28:47.344490   13232 start.go:296] duration metric: took 4.9484694s for postStartSetup
	I0722 00:28:47.347933   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:49.550285   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:49.550366   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:49.550366   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:52.116017   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:52.116017   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:52.116017   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:28:52.119028   13232 start.go:128] duration metric: took 2m7.3301724s to createHost
	I0722 00:28:52.119028   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:54.284490   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:54.284490   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:54.285100   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:56.919072   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:56.919072   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:56.923853   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:56.924556   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:56.924556   13232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:28:57.060727   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721608137.085529417
	
	I0722 00:28:57.060727   13232 fix.go:216] guest clock: 1721608137.085529417
	I0722 00:28:57.060727   13232 fix.go:229] Guest: 2024-07-22 00:28:57.085529417 +0000 UTC Remote: 2024-07-22 00:28:52.1190285 +0000 UTC m=+133.047205201 (delta=4.966500917s)
	I0722 00:28:57.061261   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:59.241766   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:59.241766   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:59.242527   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:29:01.833461   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:29:01.833461   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:01.841057   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:29:01.841794   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:29:01.841794   13232 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721608137
	I0722 00:29:01.990676   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jul 22 00:28:57 UTC 2024
	
	I0722 00:29:01.990676   13232 fix.go:236] clock set: Mon Jul 22 00:28:57 UTC 2024
	 (err=<nil>)
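	The ~4.97s host/guest skew reported above is corrected over SSH: the guest clock is read with sub-second precision, the delta against the host is computed, and the guest is set from the host's epoch seconds. The two guest-side commands, as run in this log:
	
	# Read the guest clock (seconds.nanoseconds since the epoch):
	date +%s.%N
	# Set the guest clock from the host's epoch timestamp (value from this run):
	sudo date -s @1721608137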
	I0722 00:29:01.990676   13232 start.go:83] releasing machines lock for "ha-474700", held for 2m17.2018378s
	I0722 00:29:01.991411   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:29:04.193615   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:29:04.193615   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:04.194682   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:29:06.738773   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:29:06.738773   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:06.743755   13232 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0722 00:29:06.743832   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:29:06.753934   13232 ssh_runner.go:195] Run: cat /version.json
	I0722 00:29:06.753934   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:29:09.081599   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:29:09.081816   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:09.081956   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:29:09.086527   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:29:09.086615   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:09.087109   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:29:11.976444   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:29:11.976444   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:11.977012   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:29:12.031940   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:29:12.031940   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:12.032462   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:29:12.084269   13232 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.3404465s)
	W0722 00:29:12.084269   13232 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
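	The probe fails because the host-side binary name curl.exe is sent verbatim into the Linux guest, where no such command exists. A portable form of the same reachability check, assuming plain curl is present in the guest image:
	
	# Same 2-second-timeout probe, with the Linux binary name:
	curl -sS -m 2 https://registry.k8s.io/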
	I0722 00:29:12.133393   13232 ssh_runner.go:235] Completed: cat /version.json: (5.3793907s)
	I0722 00:29:12.146809   13232 ssh_runner.go:195] Run: systemctl --version
	I0722 00:29:12.171652   13232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:29:12.182170   13232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:29:12.194405   13232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0722 00:29:12.203129   13232 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0722 00:29:12.203267   13232 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0722 00:29:12.228028   13232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:29:12.228107   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:29:12.228600   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:29:12.286530   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 00:29:12.320219   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 00:29:12.340249   13232 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 00:29:12.354389   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 00:29:12.387792   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:29:12.420997   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 00:29:12.453626   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:29:12.490792   13232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:29:12.531641   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 00:29:12.568035   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 00:29:12.606758   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 00:29:12.642021   13232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:29:12.673455   13232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:29:12.705292   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:12.913449   13232 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 00:29:12.948564   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:29:12.961136   13232 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 00:29:12.998900   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:29:13.033582   13232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:29:13.081223   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:29:13.119935   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:29:13.160434   13232 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 00:29:13.225229   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:29:13.255334   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:29:13.308531   13232 ssh_runner.go:195] Run: which cri-dockerd
	I0722 00:29:13.334391   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 00:29:13.352881   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 00:29:13.400058   13232 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 00:29:13.610798   13232 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 00:29:13.812961   13232 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 00:29:13.813217   13232 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 00:29:13.860903   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:14.077860   13232 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 00:29:16.741746   13232 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6637188s)
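
The 130-byte daemon.json copied above is not echoed in the log; a plausible equivalent that selects the cgroupfs driver would look like the sketch below. The exact contents are an assumption for illustration; only the path and the "cgroupfs" intent come from the log:

	sudo tee /etc/docker/daemon.json >/dev/null <<-'JSON'
	{ "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file", "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2" }
	JSON
	sudo systemctl daemon-reload && sudo systemctl restart docker
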
	I0722 00:29:16.753701   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 00:29:16.793574   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:29:16.832300   13232 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 00:29:17.052568   13232 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 00:29:17.257576   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:17.464695   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 00:29:17.508596   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:29:17.560811   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:17.752472   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 00:29:17.872085   13232 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 00:29:17.886983   13232 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 00:29:17.896708   13232 start.go:563] Will wait 60s for crictl version
	I0722 00:29:17.911396   13232 ssh_runner.go:195] Run: which crictl
	I0722 00:29:17.932497   13232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:29:17.987436   13232 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 00:29:17.997600   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:29:18.048331   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:29:18.106976   13232 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 00:29:18.106976   13232 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0722 00:29:18.111177   13232 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0722 00:29:18.111177   13232 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0722 00:29:18.111177   13232 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0722 00:29:18.111177   13232 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e8:0a:ec Flags:up|broadcast|multicast|running}
	I0722 00:29:18.113656   13232 ip.go:210] interface addr: fe80::cedd:59ec:4db2:d0bf/64
	I0722 00:29:18.113656   13232 ip.go:210] interface addr: 172.28.192.1/20
	I0722 00:29:18.127142   13232 ssh_runner.go:195] Run: grep 172.28.192.1	host.minikube.internal$ /etc/hosts
	I0722 00:29:18.134015   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
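
Unrolled for readability, the one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal record, append the current gateway address, and copy the result back in a single sudo step:

	tmp="/tmp/h.$$"
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '172.28.192.1\thost.minikube.internal\n'
	} > "$tmp"
	sudo cp "$tmp" /etc/hosts
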
	I0722 00:29:18.171733   13232 kubeadm.go:883] updating cluster {Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:29:18.171733   13232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 00:29:18.182418   13232 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 00:29:18.209545   13232 docker.go:685] Got preloaded images: 
	I0722 00:29:18.209646   13232 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0722 00:29:18.223151   13232 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 00:29:18.252303   13232 ssh_runner.go:195] Run: which lz4
	I0722 00:29:18.259956   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0722 00:29:18.269992   13232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:29:18.277247   13232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:29:18.277339   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0722 00:29:20.396559   13232 docker.go:649] duration metric: took 2.1362343s to copy over tarball
	I0722 00:29:20.408651   13232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:29:28.874742   13232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4659838s)
	I0722 00:29:28.874742   13232 ssh_runner.go:146] rm: /preloaded.tar.lz4
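
The --xattrs --xattrs-include security.capability flags above are deliberate: they preserve file capabilities on the extracted binaries, so images restored straight into /var/lib/docker behave as originally built. The whole preload fast path, condensed:

	# copy the tarball in, unpack docker's image store, then drop the archive
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo systemctl restart docker   # pick up the restored image metadata
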
	I0722 00:29:28.938472   13232 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 00:29:28.956602   13232 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0722 00:29:29.004598   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:29.226490   13232 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 00:29:32.700694   13232 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.4741592s)
	I0722 00:29:32.711067   13232 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 00:29:32.737505   13232 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0722 00:29:32.737505   13232 cache_images.go:84] Images are preloaded, skipping loading
	I0722 00:29:32.737641   13232 kubeadm.go:934] updating node { 172.28.196.103 8443 v1.30.3 docker true true} ...
	I0722 00:29:32.737846   13232 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-474700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.196.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:29:32.748350   13232 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0722 00:29:32.785700   13232 cni.go:84] Creating CNI manager for ""
	I0722 00:29:32.785700   13232 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 00:29:32.785700   13232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:29:32.785700   13232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.196.103 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-474700 NodeName:ha-474700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.196.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.196.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:29:32.785700   13232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.196.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-474700"
	  kubeletExtraArgs:
	    node-ip: 172.28.196.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.196.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
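
One way to sanity-check a generated config like the one above before it mutates the node is kubeadm's dry-run mode (an illustrative addition; this run proceeds straight to the real init below):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
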
	
	I0722 00:29:32.785700   13232 kube-vip.go:115] generating kube-vip config ...
	I0722 00:29:32.798468   13232 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 00:29:32.824431   13232 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 00:29:32.824431   13232 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
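
Because the manifest above is a static pod, nothing has to schedule it: the kubelet watches its staticPodPath and starts kube-vip directly, which is how the VIP 172.28.207.254 can come up before the API server is reachable. The activation step, in sketch form (the log performs the same copy via scp a few lines below):

	sudo cp kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml
	# the kubelet starts the pod; with lb_enable set, kube-vip answers ARP
	# for the VIP and load-balances :8443 across control-plane nodes
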
	I0722 00:29:32.838539   13232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:29:32.853043   13232 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:29:32.866381   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0722 00:29:32.883950   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0722 00:29:32.913256   13232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:29:32.944871   13232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0722 00:29:32.973650   13232 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0722 00:29:33.017291   13232 ssh_runner.go:195] Run: grep 172.28.207.254	control-plane.minikube.internal$ /etc/hosts
	I0722 00:29:33.026777   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:29:33.068588   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:33.271918   13232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:29:33.302647   13232 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700 for IP: 172.28.196.103
	I0722 00:29:33.302647   13232 certs.go:194] generating shared ca certs ...
	I0722 00:29:33.302647   13232 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.303606   13232 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0722 00:29:33.304024   13232 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0722 00:29:33.304191   13232 certs.go:256] generating profile certs ...
	I0722 00:29:33.304822   13232 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.key
	I0722 00:29:33.304998   13232 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.crt with IP's: []
	I0722 00:29:33.484309   13232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.crt ...
	I0722 00:29:33.485347   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.crt: {Name:mk6ec30550eeb2a591a614a0b36b22c6fae9522e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.486633   13232 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.key ...
	I0722 00:29:33.486633   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.key: {Name:mk3f09d2f0d20bdf458943336f4c23c48dfcdc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.487579   13232 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.8458c8ba
	I0722 00:29:33.487579   13232 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.8458c8ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.196.103 172.28.207.254]
	I0722 00:29:33.646137   13232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.8458c8ba ...
	I0722 00:29:33.646137   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.8458c8ba: {Name:mkc0f1f56dd689a73b6dc1cf40052e2e4287fef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.647387   13232 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.8458c8ba ...
	I0722 00:29:33.647387   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.8458c8ba: {Name:mkb8fab9785e7362a737eb82dc6d8bb058fa3c57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.649325   13232 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.8458c8ba -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt
	I0722 00:29:33.661757   13232 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.8458c8ba -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key
	I0722 00:29:33.664425   13232 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key
	I0722 00:29:33.664659   13232 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt with IP's: []
	I0722 00:29:33.843913   13232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt ...
	I0722 00:29:33.843913   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt: {Name:mk1eb944a45c1547219466417810d4bdfb6e46f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.845083   13232 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key ...
	I0722 00:29:33.846109   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key: {Name:mkb018250d4b82cd4a539b76f9426ffa11a19feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.847385   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 00:29:33.847584   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0722 00:29:33.847795   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 00:29:33.847980   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 00:29:33.848190   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 00:29:33.848333   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 00:29:33.848522   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 00:29:33.858746   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 00:29:33.859258   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem (1338 bytes)
	W0722 00:29:33.859893   13232 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100_empty.pem, impossibly tiny 0 bytes
	I0722 00:29:33.859893   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0722 00:29:33.860296   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0722 00:29:33.860548   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0722 00:29:33.860860   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0722 00:29:33.861153   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem (1708 bytes)
	I0722 00:29:33.861704   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem -> /usr/share/ca-certificates/5100.pem
	I0722 00:29:33.861881   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /usr/share/ca-certificates/51002.pem
	I0722 00:29:33.862016   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:29:33.863431   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:29:33.910143   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:29:33.957775   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:29:34.004195   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 00:29:34.053748   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:29:34.097753   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 00:29:34.143517   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:29:34.188521   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:29:34.237403   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem --> /usr/share/ca-certificates/5100.pem (1338 bytes)
	I0722 00:29:34.283066   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /usr/share/ca-certificates/51002.pem (1708 bytes)
	I0722 00:29:34.327697   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:29:34.377917   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:29:34.421282   13232 ssh_runner.go:195] Run: openssl version
	I0722 00:29:34.442067   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5100.pem && ln -fs /usr/share/ca-certificates/5100.pem /etc/ssl/certs/5100.pem"
	I0722 00:29:34.473642   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5100.pem
	I0722 00:29:34.480797   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:45 /usr/share/ca-certificates/5100.pem
	I0722 00:29:34.493209   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5100.pem
	I0722 00:29:34.514239   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5100.pem /etc/ssl/certs/51391683.0"
	I0722 00:29:34.547267   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51002.pem && ln -fs /usr/share/ca-certificates/51002.pem /etc/ssl/certs/51002.pem"
	I0722 00:29:34.583885   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51002.pem
	I0722 00:29:34.592100   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:45 /usr/share/ca-certificates/51002.pem
	I0722 00:29:34.604316   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51002.pem
	I0722 00:29:34.624306   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51002.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:29:34.655760   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:29:34.686734   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:29:34.693826   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:29:34.705111   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:29:34.725549   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
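
The oddly named links above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash lookups: the hash is derived from the certificate itself, so the same two commands generalize to any CA installed this way:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
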
	I0722 00:29:34.755132   13232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:29:34.765949   13232 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 00:29:34.766304   13232 kubeadm.go:392] StartCluster: {Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clu
sterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:29:34.775036   13232 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 00:29:34.812153   13232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:29:34.851674   13232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:29:34.883342   13232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:29:34.898716   13232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:29:34.898716   13232 kubeadm.go:157] found existing configuration files:
	
	I0722 00:29:34.909893   13232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:29:34.926930   13232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:29:34.940064   13232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:29:34.973371   13232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:29:34.990117   13232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:29:35.001439   13232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:29:35.028383   13232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:29:35.045384   13232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:29:35.056404   13232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:29:35.085496   13232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:29:35.101644   13232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:29:35.112898   13232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 00:29:35.129791   13232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:29:35.574823   13232 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:29:49.968026   13232 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:29:49.968026   13232 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:29:49.968026   13232 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:29:49.968026   13232 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:29:49.968026   13232 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:29:49.968026   13232 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:29:49.972935   13232 out.go:204]   - Generating certificates and keys ...
	I0722 00:29:49.972935   13232 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:29:49.972935   13232 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:29:49.973484   13232 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 00:29:49.973696   13232 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 00:29:49.973968   13232 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 00:29:49.974163   13232 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 00:29:49.974316   13232 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 00:29:49.974855   13232 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-474700 localhost] and IPs [172.28.196.103 127.0.0.1 ::1]
	I0722 00:29:49.975054   13232 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 00:29:49.975337   13232 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-474700 localhost] and IPs [172.28.196.103 127.0.0.1 ::1]
	I0722 00:29:49.975552   13232 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 00:29:49.975552   13232 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 00:29:49.975552   13232 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 00:29:49.975552   13232 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:29:49.976399   13232 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:29:49.976606   13232 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:29:49.976785   13232 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:29:49.977132   13232 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:29:49.977292   13232 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:29:49.977446   13232 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:29:49.977446   13232 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:29:49.979910   13232 out.go:204]   - Booting up control plane ...
	I0722 00:29:49.979910   13232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:29:49.981118   13232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:29:49.981118   13232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:29:49.981118   13232 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:29:49.981664   13232 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:29:49.981841   13232 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:29:49.981934   13232 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:29:49.981934   13232 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:29:49.981934   13232 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.272913ms
	I0722 00:29:49.982620   13232 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:29:49.982832   13232 kubeadm.go:310] [api-check] The API server is healthy after 9.147227816s
	I0722 00:29:49.983112   13232 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:29:49.983369   13232 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:29:49.983369   13232 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:29:49.983776   13232 kubeadm.go:310] [mark-control-plane] Marking the node ha-474700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:29:49.983776   13232 kubeadm.go:310] [bootstrap-token] Using token: 1axj62.jwf7mo13iodfl6h7
	I0722 00:29:49.988738   13232 out.go:204]   - Configuring RBAC rules ...
	I0722 00:29:49.988738   13232 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:29:49.989509   13232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:29:49.989509   13232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:29:49.989509   13232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:29:49.989509   13232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:29:49.989509   13232 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:29:49.990697   13232 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:29:49.990697   13232 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:29:49.990697   13232 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:29:49.990697   13232 kubeadm.go:310] 
	I0722 00:29:49.990697   13232 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:29:49.990697   13232 kubeadm.go:310] 
	I0722 00:29:49.990697   13232 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:29:49.990697   13232 kubeadm.go:310] 
	I0722 00:29:49.991313   13232 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:29:49.991394   13232 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:29:49.991607   13232 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:29:49.991607   13232 kubeadm.go:310] 
	I0722 00:29:49.991750   13232 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:29:49.991750   13232 kubeadm.go:310] 
	I0722 00:29:49.991750   13232 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:29:49.991750   13232 kubeadm.go:310] 
	I0722 00:29:49.991750   13232 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:29:49.991750   13232 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:29:49.992280   13232 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:29:49.992280   13232 kubeadm.go:310] 
	I0722 00:29:49.992460   13232 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:29:49.992680   13232 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:29:49.992680   13232 kubeadm.go:310] 
	I0722 00:29:49.992680   13232 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1axj62.jwf7mo13iodfl6h7 \
	I0722 00:29:49.992680   13232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b \
	I0722 00:29:49.993243   13232 kubeadm.go:310] 	--control-plane 
	I0722 00:29:49.993243   13232 kubeadm.go:310] 
	I0722 00:29:49.993243   13232 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:29:49.993243   13232 kubeadm.go:310] 
	I0722 00:29:49.993243   13232 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1axj62.jwf7mo13iodfl6h7 \
	I0722 00:29:49.993809   13232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b 
	I0722 00:29:49.993809   13232 cni.go:84] Creating CNI manager for ""
	I0722 00:29:49.993809   13232 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 00:29:49.995816   13232 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0722 00:29:50.010151   13232 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0722 00:29:50.018846   13232 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0722 00:29:50.018846   13232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0722 00:29:50.067209   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0722 00:29:50.745732   13232 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:29:50.758776   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:50.758776   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-474700 minikube.k8s.io/updated_at=2024_07_22T00_29_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-474700 minikube.k8s.io/primary=true
	I0722 00:29:50.778945   13232 ops.go:34] apiserver oom_adj: -16
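
The -16 read back above is the kube-apiserver's legacy OOM score adjustment; control-plane processes get negative values so the kernel's OOM killer prefers to evict ordinary workloads first. The probe itself is just:

	cat /proc/$(pgrep kube-apiserver)/oom_adj
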
	I0722 00:29:50.976668   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:51.483758   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:51.988268   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:52.488134   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:52.984858   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:53.485672   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:53.989050   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:54.494583   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:54.978642   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:55.492111   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:55.993884   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:56.481797   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:56.981947   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:57.494095   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:57.988849   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:58.481027   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:58.987955   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:59.488870   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:59.978191   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:00.494860   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:00.982173   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:01.488311   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:01.985328   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:02.479630   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:02.667045   13232 kubeadm.go:1113] duration metric: took 11.9211221s to wait for elevateKubeSystemPrivileges
	I0722 00:30:02.667226   13232 kubeadm.go:394] duration metric: took 27.9005673s to StartCluster
	I0722 00:30:02.667324   13232 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:02.667604   13232 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:30:02.668982   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:02.670547   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0722 00:30:02.670788   13232 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:30:02.670851   13232 start.go:241] waiting for startup goroutines ...
	I0722 00:30:02.670788   13232 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:30:02.670948   13232 addons.go:69] Setting storage-provisioner=true in profile "ha-474700"
	I0722 00:30:02.670948   13232 addons.go:69] Setting default-storageclass=true in profile "ha-474700"
	I0722 00:30:02.670948   13232 addons.go:234] Setting addon storage-provisioner=true in "ha-474700"
	I0722 00:30:02.670948   13232 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-474700"
	I0722 00:30:02.670948   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:30:02.670948   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:30:02.671903   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:30:02.672411   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:30:02.918795   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0722 00:30:03.330484   13232 start.go:971] {"host.minikube.internal": 172.28.192.1} host record injected into CoreDNS's ConfigMap
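
The sed pipeline above splices a hosts plugin block ahead of CoreDNS's forward directive; the injected record can be confirmed from the live ConfigMap (an illustrative check, not part of this run):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected to include:
	#     hosts {
	#        172.28.192.1 host.minikube.internal
	#        fallthrough
	#     }
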
	I0722 00:30:05.083688   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:05.083688   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:05.087304   13232 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:30:05.088066   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:05.088066   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:05.089756   13232 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:30:05.090310   13232 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:30:05.090350   13232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:30:05.090429   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:30:05.090636   13232 kapi.go:59] client config for ha-474700: &rest.Config{Host:"https://172.28.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2085e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 00:30:05.091972   13232 cert_rotation.go:137] Starting client certificate rotation controller
	I0722 00:30:05.092638   13232 addons.go:234] Setting addon default-storageclass=true in "ha-474700"
	I0722 00:30:05.092680   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:30:05.093401   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:30:07.652322   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:07.652380   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:07.652414   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:30:07.652414   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:07.652414   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:07.652414   13232 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:30:07.652414   13232 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:30:07.652414   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:30:10.058501   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:10.058501   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:10.058828   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:30:10.524513   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:30:10.525617   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:10.526104   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:30:10.664786   13232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:30:12.808109   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:30:12.808167   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:12.808513   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:30:12.949995   13232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:30:13.117236   13232 round_trippers.go:463] GET https://172.28.207.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0722 00:30:13.117320   13232 round_trippers.go:469] Request Headers:
	I0722 00:30:13.117640   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:30:13.117640   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:30:13.130268   13232 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0722 00:30:13.132255   13232 round_trippers.go:463] PUT https://172.28.207.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0722 00:30:13.132255   13232 round_trippers.go:469] Request Headers:
	I0722 00:30:13.132336   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:30:13.132336   13232 round_trippers.go:473]     Content-Type: application/json
	I0722 00:30:13.132336   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:30:13.136234   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:30:13.141440   13232 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0722 00:30:13.145766   13232 addons.go:510] duration metric: took 10.4748451s for enable addons: enabled=[storage-provisioner default-storageclass]
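The enabled-addons state can also be confirmed from the host with the same binary under test; a sketch, assuming the addons subcommand is available in this build:

	# List addon status for the ha-474700 profile.
	out/minikube-windows-amd64.exe addons list -p ha-474700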
	I0722 00:30:13.145766   13232 start.go:246] waiting for cluster config update ...
	I0722 00:30:13.145766   13232 start.go:255] writing updated cluster config ...
	I0722 00:30:13.148699   13232 out.go:177] 
	I0722 00:30:13.165352   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:30:13.165352   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:30:13.172926   13232 out.go:177] * Starting "ha-474700-m02" control-plane node in "ha-474700" cluster
	I0722 00:30:13.176650   13232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 00:30:13.176650   13232 cache.go:56] Caching tarball of preloaded images
	I0722 00:30:13.177457   13232 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 00:30:13.177457   13232 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 00:30:13.177457   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:30:13.185588   13232 start.go:360] acquireMachinesLock for ha-474700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:30:13.186591   13232 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-474700-m02"
	I0722 00:30:13.186591   13232 start.go:93] Provisioning new machine with config: &{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
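This ClusterConfig is what gets persisted to config.json in the profile directory. A sketch for inspecting the saved node list, assuming the .minikube layout used in this run:

	# Read the profile config back and show the node topology.
	$cfgPath = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json'
	(Get-Content $cfgPath -Raw | ConvertFrom-Json).Nodes | Format-Table Name, IP, ControlPlane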
	I0722 00:30:13.186591   13232 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0722 00:30:13.189622   13232 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 00:30:13.189622   13232 start.go:159] libmachine.API.Create for "ha-474700" (driver="hyperv")
	I0722 00:30:13.189622   13232 client.go:168] LocalClient.Create starting
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Parsing certificate...
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Parsing certificate...
	I0722 00:30:13.190607   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0722 00:30:15.141301   13232 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0722 00:30:15.142350   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:15.142350   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0722 00:30:17.044647   13232 main.go:141] libmachine: [stdout =====>] : False
	
	I0722 00:30:17.044791   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:17.044791   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:30:18.589777   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:30:18.590014   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:18.590014   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:30:22.258033   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:30:22.258596   13232 main.go:141] libmachine: [stderr =====>] : 
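ConvertTo-Json serializes the SwitchType enum numerically; 1 is Internal in the VMSwitchType enum (Private=0, Internal=1, External=2), consistent with the NATed Default Switch being selected here. Queried without JSON conversion, PowerShell prints the enum name instead; a one-line sketch:

	# Shows e.g. "Default Switch  Internal" rather than the numeric enum value.
	Hyper-V\Get-VMSwitch | Select-Object Name, SwitchType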
	I0722 00:30:22.260967   13232 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 00:30:22.748306   13232 main.go:141] libmachine: Creating SSH key...
	I0722 00:30:22.841237   13232 main.go:141] libmachine: Creating VM...
	I0722 00:30:22.841237   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:30:25.790856   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:30:25.790856   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:25.790856   13232 main.go:141] libmachine: Using switch "Default Switch"
	I0722 00:30:25.791269   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:30:27.596087   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:30:27.596755   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:27.596755   13232 main.go:141] libmachine: Creating VHD
	I0722 00:30:27.596755   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0722 00:30:31.585878   13232 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B8070AB8-BC9B-4CF6-8C0B-12CB14282C2C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0722 00:30:31.585878   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:31.585878   13232 main.go:141] libmachine: Writing magic tar header
	I0722 00:30:31.585878   13232 main.go:141] libmachine: Writing SSH key tar header
	I0722 00:30:31.597394   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0722 00:30:34.863492   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:34.864442   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:34.864528   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\disk.vhd' -SizeBytes 20000MB
	I0722 00:30:37.489805   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:37.489805   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:37.489805   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-474700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0722 00:30:41.252887   13232 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-474700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0722 00:30:41.252968   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:41.252968   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-474700-m02 -DynamicMemoryEnabled $false
	I0722 00:30:43.633211   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:43.633211   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:43.633211   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-474700-m02 -Count 2
	I0722 00:30:45.902186   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:45.902186   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:45.902297   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-474700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\boot2docker.iso'
	I0722 00:30:48.613280   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:48.613280   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:48.613280   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-474700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\disk.vhd'
	I0722 00:30:51.337962   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:51.337962   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:51.337962   13232 main.go:141] libmachine: Starting VM...
	I0722 00:30:51.338622   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-474700-m02
	I0722 00:30:54.526901   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:54.527080   13232 main.go:141] libmachine: [stderr =====>] : 
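Condensed, the creation sequence above is equivalent to the following manual run (same cmdlets and arguments as logged; $vmDir stands in for the machines directory):

	$vmDir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02'
	# Fixed VHD first (the "magic tar header" with the SSH key is written into it), then converted to dynamic and grown.
	Hyper-V\New-VHD -Path "$vmDir\fixed.vhd" -SizeBytes 10MB -Fixed
	Hyper-V\Convert-VHD -Path "$vmDir\fixed.vhd" -DestinationPath "$vmDir\disk.vhd" -VHDType Dynamic -DeleteSource
	Hyper-V\Resize-VHD -Path "$vmDir\disk.vhd" -SizeBytes 20000MB
	# VM with static memory, 2 vCPUs, the boot2docker ISO attached as DVD, and the data disk.
	Hyper-V\New-VM ha-474700-m02 -Path $vmDir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	Hyper-V\Set-VMMemory -VMName ha-474700-m02 -DynamicMemoryEnabled $false
	Hyper-V\Set-VMProcessor ha-474700-m02 -Count 2
	Hyper-V\Set-VMDvdDrive -VMName ha-474700-m02 -Path "$vmDir\boot2docker.iso"
	Hyper-V\Add-VMHardDiskDrive -VMName ha-474700-m02 -Path "$vmDir\disk.vhd"
	Hyper-V\Start-VM ha-474700-m02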
	I0722 00:30:54.527080   13232 main.go:141] libmachine: Waiting for host to start...
	I0722 00:30:54.527080   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:30:56.954495   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:56.954495   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:56.954495   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:30:59.605427   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:59.605427   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:00.611425   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:02.956736   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:02.957138   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:02.957201   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:05.566151   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:31:05.566714   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:06.574724   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:08.898856   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:08.898856   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:08.898856   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:11.552542   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:31:11.552934   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:12.556467   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:14.816279   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:14.816740   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:14.816931   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:17.431936   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:31:17.432199   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:18.435950   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:20.790432   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:20.790517   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:20.790517   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:23.401609   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:23.401932   13232 main.go:141] libmachine: [stderr =====>] : 
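The repeated state/ipaddresses query pairs above are a readiness poll: the adapter reports no address until the guest's DHCP lease lands. A sketch of the same loop, assuming the VM name from this run:

	# Poll until the first adapter reports an IPv4 address.
	do {
	    Start-Sleep -Seconds 1
	    $ip = ((Hyper-V\Get-VM ha-474700-m02).NetworkAdapters[0]).IPAddresses |
	        Where-Object { $_ -match '^\d{1,3}(\.\d{1,3}){3}$' } |
	        Select-Object -First 1
	} while (-not $ip)
	$ip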
	I0722 00:31:23.402004   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:25.691254   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:25.692007   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:25.692270   13232 machine.go:94] provisionDockerMachine start ...
	I0722 00:31:25.692270   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:28.102838   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:28.102838   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:28.102954   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:30.832390   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:30.832632   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:30.837401   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:31:30.848587   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:31:30.848587   13232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:31:30.987963   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:31:30.987963   13232 buildroot.go:166] provisioning hostname "ha-474700-m02"
	I0722 00:31:30.988079   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:33.210564   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:33.210650   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:33.210650   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:35.779839   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:35.779839   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:35.784421   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:31:35.785346   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:31:35.785346   13232 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-474700-m02 && echo "ha-474700-m02" | sudo tee /etc/hostname
	I0722 00:31:35.945908   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-474700-m02
	
	I0722 00:31:35.946066   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:38.129204   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:38.129204   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:38.129204   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:40.740804   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:40.741541   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:40.747901   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:31:40.748526   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:31:40.748526   13232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-474700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-474700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-474700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:31:40.897291   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
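The hostname and /etc/hosts edit can be spot-checked over the same channel libmachine uses; an illustrative direct SSH call with the generated machine key:

	$key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa'
	ssh -o StrictHostKeyChecking=no -i $key docker@172.28.200.182 "hostname; grep ha-474700-m02 /etc/hosts"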
	I0722 00:31:40.897291   13232 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0722 00:31:40.897291   13232 buildroot.go:174] setting up certificates
	I0722 00:31:40.897291   13232 provision.go:84] configureAuth start
	I0722 00:31:40.897291   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:43.102290   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:43.102290   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:43.102290   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:45.728549   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:45.728549   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:45.728549   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:47.956379   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:47.956379   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:47.957245   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:50.602201   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:50.602935   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:50.603231   13232 provision.go:143] copyHostCerts
	I0722 00:31:50.603231   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0722 00:31:50.603762   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0722 00:31:50.603762   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0722 00:31:50.604212   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0722 00:31:50.605565   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0722 00:31:50.605955   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0722 00:31:50.605982   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0722 00:31:50.605982   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0722 00:31:50.606936   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0722 00:31:50.607628   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0722 00:31:50.607628   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0722 00:31:50.607708   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0722 00:31:50.608894   13232 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-474700-m02 san=[127.0.0.1 172.28.200.182 ha-474700-m02 localhost minikube]
	I0722 00:31:50.864431   13232 provision.go:177] copyRemoteCerts
	I0722 00:31:50.875257   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:31:50.876204   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:53.104195   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:53.105250   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:53.105250   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:55.745667   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:55.746179   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:55.746179   13232 sshutil.go:53] new ssh client: &{IP:172.28.200.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa Username:docker}
	I0722 00:31:55.859007   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9836871s)
	I0722 00:31:55.859113   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0722 00:31:55.859350   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 00:31:55.908563   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0722 00:31:55.909043   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 00:31:55.962811   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0722 00:31:55.963010   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:31:56.011405   13232 provision.go:87] duration metric: took 15.1139223s to configureAuth
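The server certificate pushed above carries the SANs listed at generation time (127.0.0.1, the node IP, the hostname, localhost, minikube). A sketch that reads them back from inside the guest, assuming openssl and grep are present in the Buildroot image:

	$key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa'
	ssh -i $key docker@172.28.200.182 `
	    "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"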
	I0722 00:31:56.011469   13232 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:31:56.012051   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:31:56.012051   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:58.284847   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:58.284914   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:58.284971   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:00.890390   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:00.890829   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:00.896407   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:32:00.897170   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:32:00.897170   13232 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 00:32:01.033921   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 00:32:01.033921   13232 buildroot.go:70] root file system type: tmpfs
	I0722 00:32:01.034305   13232 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 00:32:01.034305   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:03.227013   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:03.227800   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:03.227879   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:05.882398   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:05.883394   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:05.889303   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:32:05.889522   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:32:05.889522   13232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.196.103"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 00:32:06.067572   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.196.103
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 00:32:06.067572   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:08.274785   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:08.275034   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:08.275109   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:10.966387   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:10.966387   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:10.972755   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:32:10.973419   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:32:10.973419   13232 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 00:32:13.240949   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 00:32:13.241015   13232 machine.go:97] duration metric: took 47.5481423s to provisionDockerMachine
	I0722 00:32:13.241015   13232 client.go:171] duration metric: took 2m0.0498695s to LocalClient.Create
	I0722 00:32:13.241076   13232 start.go:167] duration metric: took 2m0.0499311s to libmachine.API.Create "ha-474700"
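After the unit swap, daemon-reload, and restart above, the daemon's health can be probed the same way; a sketch (the docker CLI inside the guest is an assumption):

	$key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa'
	ssh -i $key docker@172.28.200.182 "sudo systemctl is-active docker && docker version --format '{{.Server.Version}}'"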
	I0722 00:32:13.241076   13232 start.go:293] postStartSetup for "ha-474700-m02" (driver="hyperv")
	I0722 00:32:13.241157   13232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:32:13.252446   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:32:13.252446   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:15.444356   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:15.444356   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:15.445350   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:18.058772   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:18.058772   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:18.058772   13232 sshutil.go:53] new ssh client: &{IP:172.28.200.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa Username:docker}
	I0722 00:32:18.169839   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9172163s)
	I0722 00:32:18.181671   13232 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:32:18.187528   13232 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:32:18.187528   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0722 00:32:18.188097   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0722 00:32:18.188969   13232 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0722 00:32:18.188969   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0722 00:32:18.200823   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:32:18.218163   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0722 00:32:18.263010   13232 start.go:296] duration metric: took 5.0218706s for postStartSetup
	I0722 00:32:18.266077   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:20.457590   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:20.457590   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:20.457590   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:23.084801   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:23.085075   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:23.085302   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:32:23.087998   13232 start.go:128] duration metric: took 2m9.8997595s to createHost
	I0722 00:32:23.087998   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:25.357336   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:25.357336   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:25.357336   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:28.064721   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:28.064721   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:28.076076   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:32:28.076661   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:32:28.076661   13232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:32:28.221654   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721608348.229095581
	
	I0722 00:32:28.221654   13232 fix.go:216] guest clock: 1721608348.229095581
	I0722 00:32:28.221654   13232 fix.go:229] Guest: 2024-07-22 00:32:28.229095581 +0000 UTC Remote: 2024-07-22 00:32:23.0879982 +0000 UTC m=+344.013497901 (delta=5.141097381s)
	I0722 00:32:28.222199   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:30.472938   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:30.472938   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:30.473131   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:33.096407   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:33.096407   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:33.102650   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:32:33.103227   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:32:33.103227   13232 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721608348
	I0722 00:32:33.261147   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jul 22 00:32:28 UTC 2024
	
	I0722 00:32:33.261211   13232 fix.go:236] clock set: Mon Jul 22 00:32:28 UTC 2024
	 (err=<nil>)
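The 5.14s delta above is host-versus-guest wall-clock skew measured around the SSH round trip, which is why the guest clock is then set explicitly. A sketch of the same measurement, assuming the key path from this run:

	$key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa'
	# Guest epoch seconds (with nanoseconds) vs. host epoch seconds.
	$guestEpoch = [double](ssh -i $key docker@172.28.200.182 "date +%s.%N")
	$hostEpoch  = [DateTimeOffset]::UtcNow.ToUnixTimeMilliseconds() / 1000
	"guest - host delta: $($guestEpoch - $hostEpoch) s"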
	I0722 00:32:33.261211   13232 start.go:83] releasing machines lock for "ha-474700-m02", held for 2m20.0728446s
	I0722 00:32:33.261432   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:35.512963   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:35.512963   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:35.513684   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:38.166784   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:38.167551   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:38.170885   13232 out.go:177] * Found network options:
	I0722 00:32:38.174111   13232 out.go:177]   - NO_PROXY=172.28.196.103
	W0722 00:32:38.177988   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 00:32:38.180443   13232 out.go:177]   - NO_PROXY=172.28.196.103
	W0722 00:32:38.183470   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 00:32:38.184555   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 00:32:38.186565   13232 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0722 00:32:38.186565   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:38.196514   13232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 00:32:38.196514   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:40.502778   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:40.502778   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:40.502778   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:40.502778   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:40.502778   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:40.503845   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:43.293995   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:43.294147   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:43.294403   13232 sshutil.go:53] new ssh client: &{IP:172.28.200.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa Username:docker}
	I0722 00:32:43.326711   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:43.326711   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:43.327705   13232 sshutil.go:53] new ssh client: &{IP:172.28.200.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa Username:docker}
	I0722 00:32:43.402876   13232 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2062964s)
	W0722 00:32:43.402876   13232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:32:43.415421   13232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:32:43.420503   13232 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2338723s)
	W0722 00:32:43.420503   13232 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
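The exit status 127 here comes from the probe's spelling, not necessarily from connectivity: the command runs inside the Linux guest, where the Windows-style curl.exe does not exist, so the warning below does not by itself prove the registry is unreachable. The reachability check is reproducible with plain curl, assuming it is present in the guest image:

	$key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa'
	ssh -i $key docker@172.28.200.182 "curl -sS -m 2 https://registry.k8s.io/"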
	I0722 00:32:43.447831   13232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:32:43.447831   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:32:43.447831   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:32:43.495734   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 00:32:43.528176   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0722 00:32:43.538498   13232 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0722 00:32:43.538498   13232 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0722 00:32:43.553235   13232 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 00:32:43.565792   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 00:32:43.598126   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:32:43.628988   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 00:32:43.661088   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:32:43.693494   13232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:32:43.727743   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 00:32:43.759158   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 00:32:43.789520   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 00:32:43.821489   13232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:32:43.849725   13232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:32:43.881804   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:32:44.094365   13232 ssh_runner.go:195] Run: sudo systemctl restart containerd
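	The sed sequence above converges /etc/containerd/config.toml on a cgroupfs-driven, CNI-aware configuration before the restart. A hedged spot check of the two most consequential fields, assuming the same profile and node names:
	  # Confirm the values the sed edits above should have left behind.
	  minikube -p ha-474700 ssh -n m02 -- sudo grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	  # Expected, per the commands above:
	  #   sandbox_image = "registry.k8s.io/pause:3.9"
	  #   SystemdCgroup = false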
	I0722 00:32:44.127015   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:32:44.138426   13232 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 00:32:44.173331   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:32:44.218434   13232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:32:44.262481   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:32:44.307163   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:32:44.345094   13232 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 00:32:44.404693   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:32:44.428982   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:32:44.474901   13232 ssh_runner.go:195] Run: which cri-dockerd
	I0722 00:32:44.492978   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 00:32:44.512026   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 00:32:44.556017   13232 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 00:32:44.748279   13232 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 00:32:44.940396   13232 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 00:32:44.940589   13232 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 00:32:44.988131   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:32:45.191921   13232 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 00:32:47.834316   13232 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6423618s)
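	The 130-byte /etc/docker/daemon.json scp'd above is what switches Docker to the cgroupfs driver; the restart makes it take effect. A minimal verification sketch (the --format template is standard docker CLI):
	  # Should print "cgroupfs" after the restart above.
	  docker info --format '{{.CgroupDriver}}'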
	I0722 00:32:47.845789   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 00:32:47.880608   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:32:47.915622   13232 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 00:32:48.128204   13232 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 00:32:48.328212   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:32:48.545087   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 00:32:48.585516   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:32:48.618785   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:32:48.814005   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
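	At this point crictl has been pointed at cri-dockerd and the socket/service pair re-enabled. A hedged way to confirm the wiring matches the /etc/crictl.yaml written above, from inside the node:
	  # The runtime endpoint should be the cri-dockerd socket.
	  cat /etc/crictl.yaml          # runtime-endpoint: unix:///var/run/cri-dockerd.sock
	  sudo crictl version           # same probe the log itself runs just below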
	I0722 00:32:48.920286   13232 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 00:32:48.932135   13232 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 00:32:48.941804   13232 start.go:563] Will wait 60s for crictl version
	I0722 00:32:48.952104   13232 ssh_runner.go:195] Run: which crictl
	I0722 00:32:48.970352   13232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:32:49.024710   13232 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 00:32:49.031618   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:32:49.081617   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:32:49.118513   13232 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 00:32:49.121988   13232 out.go:177]   - env NO_PROXY=172.28.196.103
	I0722 00:32:49.127060   13232 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0722 00:32:49.130991   13232 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0722 00:32:49.130991   13232 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0722 00:32:49.130991   13232 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0722 00:32:49.130991   13232 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e8:0a:ec Flags:up|broadcast|multicast|running}
	I0722 00:32:49.133996   13232 ip.go:210] interface addr: fe80::cedd:59ec:4db2:d0bf/64
	I0722 00:32:49.133996   13232 ip.go:210] interface addr: 172.28.192.1/20
	I0722 00:32:49.144994   13232 ssh_runner.go:195] Run: grep 172.28.192.1	host.minikube.internal$ /etc/hosts
	I0722 00:32:49.156106   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
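	The /etc/hosts one-liner above is a small atomic-replace idiom: strip any stale entry for the name, append the fresh IP, then copy the temp file over /etc/hosts. A generalized sketch of the same pattern:
	  # Replace (or add) a single hosts entry without duplicating it.
	  NAME=host.minikube.internal
	  IP=172.28.192.1
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts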
	I0722 00:32:49.177454   13232 mustload.go:65] Loading cluster: ha-474700
	I0722 00:32:49.178115   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:32:49.178115   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:32:51.380908   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:51.381584   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:51.381664   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:32:51.382519   13232 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700 for IP: 172.28.200.182
	I0722 00:32:51.382519   13232 certs.go:194] generating shared ca certs ...
	I0722 00:32:51.382519   13232 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:32:51.383760   13232 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0722 00:32:51.384340   13232 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0722 00:32:51.384655   13232 certs.go:256] generating profile certs ...
	I0722 00:32:51.385280   13232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.key
	I0722 00:32:51.385410   13232 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.c396fe80
	I0722 00:32:51.385623   13232 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.c396fe80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.196.103 172.28.200.182 172.28.207.254]
	I0722 00:32:51.553909   13232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.c396fe80 ...
	I0722 00:32:51.553909   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.c396fe80: {Name:mka6070aeb7f4cde3be31aaa596d95d9c034e587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:32:51.555670   13232 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.c396fe80 ...
	I0722 00:32:51.555670   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.c396fe80: {Name:mk6afa224509e2e4545fafb434a2e97f50f307ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:32:51.556729   13232 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.c396fe80 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt
	I0722 00:32:51.569613   13232 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.c396fe80 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key
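	The apiserver serving cert minted above must carry every address a client may dial, which is why its SAN list includes the service IP (10.96.0.1), localhost, both control-plane node IPs, and the HA VIP 172.28.207.254. A hedged inspection sketch, run against either copy of the cert with a reasonably recent OpenSSL:
	  # List the SANs baked into the freshly generated apiserver cert.
	  openssl x509 -noout -ext subjectAltName -in apiserver.crt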
	I0722 00:32:51.571754   13232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key
	I0722 00:32:51.571754   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 00:32:51.572089   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0722 00:32:51.572231   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 00:32:51.572423   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 00:32:51.572607   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 00:32:51.572607   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 00:32:51.572832   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 00:32:51.572832   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 00:32:51.573434   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem (1338 bytes)
	W0722 00:32:51.573815   13232 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100_empty.pem, impossibly tiny 0 bytes
	I0722 00:32:51.573815   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0722 00:32:51.574153   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0722 00:32:51.574533   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0722 00:32:51.574765   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0722 00:32:51.575036   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem (1708 bytes)
	I0722 00:32:51.575554   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /usr/share/ca-certificates/51002.pem
	I0722 00:32:51.575774   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:32:51.575943   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem -> /usr/share/ca-certificates/5100.pem
	I0722 00:32:51.576127   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:32:53.774921   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:53.774958   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:53.775105   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:56.397483   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:32:56.397483   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:56.397782   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:32:56.506834   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0722 00:32:56.515655   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0722 00:32:56.550345   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0722 00:32:56.557248   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0722 00:32:56.588373   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0722 00:32:56.594475   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0722 00:32:56.626291   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0722 00:32:56.633194   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0722 00:32:56.663279   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0722 00:32:56.669787   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0722 00:32:56.699138   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0722 00:32:56.705828   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0722 00:32:56.729850   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:32:56.776166   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:32:56.818868   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:32:56.870982   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 00:32:56.922233   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0722 00:32:56.971389   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:32:57.020551   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:32:57.081073   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:32:57.139509   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /usr/share/ca-certificates/51002.pem (1708 bytes)
	I0722 00:32:57.188037   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:32:57.241859   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem --> /usr/share/ca-certificates/5100.pem (1338 bytes)
	I0722 00:32:57.287990   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0722 00:32:57.319658   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0722 00:32:57.351886   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0722 00:32:57.388110   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0722 00:32:57.419666   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0722 00:32:57.451263   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0722 00:32:57.485749   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0722 00:32:57.544292   13232 ssh_runner.go:195] Run: openssl version
	I0722 00:32:57.565401   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51002.pem && ln -fs /usr/share/ca-certificates/51002.pem /etc/ssl/certs/51002.pem"
	I0722 00:32:57.601097   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51002.pem
	I0722 00:32:57.607937   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:45 /usr/share/ca-certificates/51002.pem
	I0722 00:32:57.621507   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51002.pem
	I0722 00:32:57.642603   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51002.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:32:57.672729   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:32:57.704656   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:32:57.713016   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:32:57.725638   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:32:57.747604   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:32:57.779563   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5100.pem && ln -fs /usr/share/ca-certificates/5100.pem /etc/ssl/certs/5100.pem"
	I0722 00:32:57.810050   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5100.pem
	I0722 00:32:57.816659   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:45 /usr/share/ca-certificates/5100.pem
	I0722 00:32:57.828122   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5100.pem
	I0722 00:32:57.847891   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5100.pem /etc/ssl/certs/51391683.0"
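	The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs must be reachable via a symlink named after its subject hash plus ".0" (b5213941.0 for minikubeCA, for example). The same step by hand:
	  # Create the hash symlink OpenSSL uses to find a trusted CA.
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"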
	I0722 00:32:57.877407   13232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:32:57.884040   13232 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 00:32:57.884040   13232 kubeadm.go:934] updating node {m02 172.28.200.182 8443 v1.30.3 docker true true} ...
	I0722 00:32:57.884922   13232 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-474700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.200.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
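	The kubelet unit fragment above is delivered as a systemd drop-in (the 10-kubeadm.conf scp'd at 00:33:02 below); the empty ExecStart= line first clears the base unit's command so the node-specific one can replace it. To see the merged result after daemon-reload:
	  # Show the effective kubelet unit including drop-ins.
	  systemctl cat kubelet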
	I0722 00:32:57.884922   13232 kube-vip.go:115] generating kube-vip config ...
	I0722 00:32:57.898718   13232 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 00:32:57.925136   13232 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 00:32:57.925270   13232 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
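	The manifest above is a static pod: once written to /etc/kubernetes/manifests/kube-vip.yaml (the scp at 00:33:02 below), the kubelet runs it directly, with no API server involved. A hedged client-side sanity check of the generated YAML:
	  # Parse-check the generated manifest without touching the cluster.
	  kubectl apply --dry-run=client -f /etc/kubernetes/manifests/kube-vip.yaml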
	I0722 00:32:57.937925   13232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:32:57.954474   13232 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0722 00:32:57.967758   13232 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0722 00:32:57.992420   13232 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl
	I0722 00:32:57.993221   13232 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm
	I0722 00:32:57.993221   13232 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet
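	The ?checksum=file:... suffix on each URL above tells the downloader to fetch the published .sha256 alongside the binary and verify it. The equivalent manual check for one of them:
	  # Download kubelet plus its checksum and verify, as the log's downloader does.
	  curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
	  curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check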
	I0722 00:32:59.191230   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 00:32:59.203913   13232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 00:32:59.211674   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0722 00:32:59.211674   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0722 00:33:00.424513   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 00:33:00.435494   13232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 00:33:00.443643   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0722 00:33:00.443643   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0722 00:33:02.102341   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:33:02.129355   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 00:33:02.141937   13232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 00:33:02.149897   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0722 00:33:02.150120   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0722 00:33:02.725260   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0722 00:33:02.745718   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0722 00:33:02.776743   13232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:33:02.807290   13232 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0722 00:33:02.851102   13232 ssh_runner.go:195] Run: grep 172.28.207.254	control-plane.minikube.internal$ /etc/hosts
	I0722 00:33:02.857981   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:33:02.892335   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:33:03.102108   13232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:33:03.130995   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:33:03.131830   13232 start.go:317] joinCluster: &{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.200.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:33:03.132070   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0722 00:33:03.132218   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:33:05.318159   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:33:05.318159   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:33:05.318159   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:33:07.951907   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:33:07.951907   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:33:07.952173   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:33:08.189099   13232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0569656s)
	I0722 00:33:08.189099   13232 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.200.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:33:08.189099   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mt5mtd.9u04lu9vp7f5a7d3 --discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-474700-m02 --control-plane --apiserver-advertise-address=172.28.200.182 --apiserver-bind-port=8443"
	I0722 00:33:53.619376   13232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mt5mtd.9u04lu9vp7f5a7d3 --discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-474700-m02 --control-plane --apiserver-advertise-address=172.28.200.182 --apiserver-bind-port=8443": (45.429709s)
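	The 45s kubeadm join above is the whole control-plane enrollment: the token and CA cert hash were minted moments earlier on the primary (the token create run at 00:33:03), and --control-plane plus the advertise address make m02 a second API server behind the VIP. Regenerating the same join line by hand on the primary:
	  # Prints a ready-to-run "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line.
	  sudo kubeadm token create --print-join-command --ttl=0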
	I0722 00:33:53.620353   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0722 00:33:54.442608   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-474700-m02 minikube.k8s.io/updated_at=2024_07_22T00_33_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-474700 minikube.k8s.io/primary=false
	I0722 00:33:55.120210   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-474700-m02 node-role.kubernetes.io/control-plane:NoSchedule-
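	The two kubectl calls above are post-join bookkeeping: stamp the node with minikube's metadata labels, then strip the control-plane NoSchedule taint (the trailing "-" on a taint means "remove"). Stripped to their essentials:
	  # Label the new node and allow regular workloads to schedule on it.
	  kubectl label --overwrite nodes ha-474700-m02 minikube.k8s.io/primary=false
	  kubectl taint nodes ha-474700-m02 node-role.kubernetes.io/control-plane:NoSchedule-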
	I0722 00:33:55.313557   13232 start.go:319] duration metric: took 52.1810745s to joinCluster
	I0722 00:33:55.313901   13232 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.28.200.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:33:55.314477   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:33:55.317644   13232 out.go:177] * Verifying Kubernetes components...
	I0722 00:33:55.332050   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:33:55.745405   13232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:33:55.773616   13232 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:33:55.774436   13232 kapi.go:59] client config for ha-474700: &rest.Config{Host:"https://172.28.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2085e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0722 00:33:55.774611   13232 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.207.254:8443 with https://172.28.196.103:8443
	I0722 00:33:55.775721   13232 node_ready.go:35] waiting up to 6m0s for node "ha-474700-m02" to be "Ready" ...
	I0722 00:33:55.775959   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:55.776014   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:55.776014   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:55.776061   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:55.798288   13232 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
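	From here the log is a readiness poll: roughly every 500ms it GETs /api/v1/nodes/ha-474700-m02 and checks the Ready condition, up to the 6m0s budget declared above. The same check via kubectl, as a sketch:
	  # Prints "True" once the node reports Ready.
	  kubectl get node ha-474700-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'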
	I0722 00:33:56.289761   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:56.289848   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:56.289848   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:56.289934   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:56.297160   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:33:56.783141   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:56.783440   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:56.783440   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:56.783440   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:56.788245   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:33:57.278747   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:57.278809   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:57.278809   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:57.278809   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:57.284244   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:33:57.787454   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:57.787752   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:57.787752   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:57.787752   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:57.794077   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:33:57.794077   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:33:58.279829   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:58.279896   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:58.279896   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:58.279896   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:58.290506   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:33:58.786134   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:58.786134   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:58.786134   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:58.786134   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:58.793152   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:33:59.279723   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:59.279947   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:59.279947   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:59.279947   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:59.299564   13232 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0722 00:33:59.790738   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:59.790830   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:59.790830   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:59.790887   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:59.796481   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:33:59.797645   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:00.281101   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:00.281101   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:00.281101   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:00.281101   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:00.286117   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:00.789417   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:00.789584   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:00.789736   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:00.789736   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:00.795398   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:01.278707   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:01.278707   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:01.278707   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:01.278707   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:01.285111   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:01.786625   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:01.786625   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:01.786718   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:01.786718   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:01.791430   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:02.281722   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:02.281808   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:02.281808   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:02.281808   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:02.289563   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:34:02.291083   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:02.784781   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:02.785068   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:02.785068   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:02.785068   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:02.950295   13232 round_trippers.go:574] Response Status: 200 OK in 165 milliseconds
	I0722 00:34:03.286504   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:03.286504   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:03.286504   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:03.286504   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:03.293090   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:03.785806   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:03.786111   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:03.786156   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:03.786156   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:03.795007   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:34:04.288523   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:04.288523   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:04.288838   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:04.288838   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:04.298809   13232 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 00:34:04.299812   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:04.790884   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:04.790884   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:04.791088   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:04.791088   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:04.796111   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:05.291764   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:05.291764   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:05.291764   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:05.291764   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:05.297370   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:05.776369   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:05.776369   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:05.776369   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:05.776369   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:05.780870   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:06.278203   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:06.278203   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:06.278327   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:06.278327   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:06.282908   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:06.777527   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:06.777527   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:06.777527   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:06.777527   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:06.782949   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:06.784509   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:07.286772   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:07.286995   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:07.286995   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:07.286995   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:07.295390   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:34:07.782195   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:07.782195   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:07.782195   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:07.782428   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:07.786186   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:34:08.290855   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:08.290855   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:08.290998   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:08.290998   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:08.297084   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:08.783060   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:08.783299   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:08.783299   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:08.783299   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:08.790659   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:34:08.792026   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:09.289423   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:09.289695   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:09.289695   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:09.289695   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:09.294328   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:09.780714   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:09.780778   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:09.780778   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:09.780778   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:09.786758   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:10.286800   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:10.286800   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:10.286800   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:10.286800   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:10.293409   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:10.790781   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:10.790863   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:10.790863   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:10.790863   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:10.796499   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:10.797422   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:11.290558   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:11.290558   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:11.290558   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:11.290558   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:11.297195   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:11.777203   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:11.777203   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:11.777203   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:11.777203   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:11.781597   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:12.291272   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:12.291272   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:12.291272   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:12.291272   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:12.296882   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:12.777591   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:12.777689   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:12.777689   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:12.777689   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:12.782967   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:13.282748   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:13.282748   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:13.282748   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:13.282748   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:13.286787   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:13.288961   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:13.787719   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:13.787719   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:13.787719   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:13.787719   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:13.793219   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:14.287485   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:14.287485   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:14.287485   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:14.287485   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:14.294073   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:14.785598   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:14.785863   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:14.785863   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:14.785863   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:14.790197   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:15.288893   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:15.288893   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:15.288893   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:15.288893   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:15.293663   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:15.295113   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:15.787071   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:15.787272   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:15.787272   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:15.787272   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:15.792151   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:16.289652   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:16.289761   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:16.289761   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:16.289761   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:16.295151   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:16.778744   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:16.779014   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:16.779014   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:16.779014   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:16.783589   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:17.281211   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:17.281211   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:17.281306   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:17.281306   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:17.286782   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:17.777251   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:17.777346   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:17.777346   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:17.777436   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:17.801067   13232 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0722 00:34:17.802513   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:18.281552   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:18.281552   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:18.281552   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:18.281552   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:18.288076   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:18.783473   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:18.783585   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:18.783585   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:18.783585   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:18.789229   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:19.285966   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:19.285966   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.285966   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.286089   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.291884   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:19.292156   13232 node_ready.go:49] node "ha-474700-m02" has status "Ready":"True"
	I0722 00:34:19.292708   13232 node_ready.go:38] duration metric: took 23.5166943s for node "ha-474700-m02" to be "Ready" ...
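
The loop above simply re-fetches the node object every ~500ms until its Ready condition flips to "True". A minimal sketch of that polling pattern using only Go's standard library follows; the endpoint and node name are taken from the log, but the bare HTTP client, skipped TLS verification, and omitted authentication are simplifying assumptions, not minikube's actual client:

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // nodeStatus mirrors only the fields of the Node object we inspect.
    type nodeStatus struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    // waitNodeReady polls GET /api/v1/nodes/<name> every 500ms until the
    // Ready condition reports "True" or the timeout elapses.
    func waitNodeReady(base, name string, timeout time.Duration) error {
        // InsecureSkipVerify keeps the sketch short; a real client would
        // trust the cluster CA and attach credentials.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(base + "/api/v1/nodes/" + name)
            if err == nil {
                var n nodeStatus
                json.NewDecoder(resp.Body).Decode(&n)
                resp.Body.Close()
                for _, c := range n.Status.Conditions {
                    if c.Type == "Ready" && c.Status == "True" {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready after %s", name, timeout)
    }

    func main() {
        if err := waitNodeReady("https://172.28.196.103:8443", "ha-474700-m02", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
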
	I0722 00:34:19.292708   13232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:34:19.292958   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:34:19.292958   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.292958   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.292958   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.299065   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:19.309537   13232 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.309537   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fwrd4
	I0722 00:34:19.309537   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.309537   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.309537   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.314329   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:19.315566   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:19.315566   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.315566   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.315566   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.319106   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:34:19.320406   13232 pod_ready.go:92] pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:19.320406   13232 pod_ready.go:81] duration metric: took 10.8683ms for pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.320406   13232 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.320753   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ndgcf
	I0722 00:34:19.320753   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.320849   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.320935   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.327245   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:19.328201   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:19.328201   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.328201   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.328201   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.332953   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:19.334054   13232 pod_ready.go:92] pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:19.334054   13232 pod_ready.go:81] duration metric: took 13.301ms for pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.334054   13232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.334054   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700
	I0722 00:34:19.334054   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.334054   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.334054   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.338707   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:19.338707   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:19.338707   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.338707   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.338707   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.343940   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:19.344537   13232 pod_ready.go:92] pod "etcd-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:19.344537   13232 pod_ready.go:81] duration metric: took 10.4826ms for pod "etcd-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.344537   13232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.345166   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700-m02
	I0722 00:34:19.345218   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.345218   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.345312   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.349315   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:34:19.349941   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:19.350049   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.350049   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.350049   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.353686   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:34:19.354907   13232 pod_ready.go:92] pod "etcd-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:19.354907   13232 pod_ready.go:81] duration metric: took 10.3697ms for pod "etcd-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.354907   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.492030   13232 request.go:629] Waited for 137.1222ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700
	I0722 00:34:19.492297   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700
	I0722 00:34:19.492297   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.492297   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.492297   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.498628   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:19.694640   13232 request.go:629] Waited for 195.7499ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:19.694779   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:19.694819   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.694819   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.694819   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.708584   13232 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0722 00:34:19.709146   13232 pod_ready.go:92] pod "kube-apiserver-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:19.709146   13232 pod_ready.go:81] duration metric: took 354.2348ms for pod "kube-apiserver-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.709146   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.899512   13232 request.go:629] Waited for 189.9726ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m02
	I0722 00:34:19.899830   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m02
	I0722 00:34:19.899830   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.899830   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.899830   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.905454   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:20.088127   13232 request.go:629] Waited for 180.4673ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:20.088357   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:20.088357   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:20.088357   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:20.088357   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:20.094235   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:20.095070   13232 pod_ready.go:92] pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:20.095142   13232 pod_ready.go:81] duration metric: took 385.9917ms for pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:20.095211   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:20.289630   13232 request.go:629] Waited for 194.1867ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700
	I0722 00:34:20.289932   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700
	I0722 00:34:20.289932   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:20.290014   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:20.290014   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:20.294731   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:20.494735   13232 request.go:629] Waited for 198.6017ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:20.494832   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:20.494832   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:20.494923   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:20.494923   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:20.499879   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:20.501399   13232 pod_ready.go:92] pod "kube-controller-manager-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:20.501399   13232 pod_ready.go:81] duration metric: took 406.1833ms for pod "kube-controller-manager-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:20.501399   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:20.696593   13232 request.go:629] Waited for 194.8896ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m02
	I0722 00:34:20.696697   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m02
	I0722 00:34:20.696697   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:20.696697   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:20.696786   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:20.701997   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:20.898945   13232 request.go:629] Waited for 195.2615ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:20.899162   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:20.899162   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:20.899162   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:20.899162   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:20.905760   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:20.906974   13232 pod_ready.go:92] pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:20.907055   13232 pod_ready.go:81] duration metric: took 405.6509ms for pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:20.907111   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwkpc" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:21.087719   13232 request.go:629] Waited for 180.5074ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkpc
	I0722 00:34:21.087878   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkpc
	I0722 00:34:21.087942   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:21.087942   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:21.087942   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:21.098543   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:34:21.290538   13232 request.go:629] Waited for 189.8849ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:21.290667   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:21.290667   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:21.290667   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:21.290730   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:21.295499   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:21.295499   13232 pod_ready.go:92] pod "kube-proxy-fwkpc" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:21.295499   13232 pod_ready.go:81] duration metric: took 388.3838ms for pod "kube-proxy-fwkpc" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:21.295499   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kmnj9" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:21.492538   13232 request.go:629] Waited for 196.8842ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kmnj9
	I0722 00:34:21.492694   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kmnj9
	I0722 00:34:21.492694   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:21.492694   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:21.492694   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:21.503618   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:34:21.697095   13232 request.go:629] Waited for 192.2779ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:21.697095   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:21.697095   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:21.697095   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:21.697095   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:21.702977   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:21.704052   13232 pod_ready.go:92] pod "kube-proxy-kmnj9" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:21.704138   13232 pod_ready.go:81] duration metric: took 408.6337ms for pod "kube-proxy-kmnj9" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:21.704243   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:21.888691   13232 request.go:629] Waited for 184.1546ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700
	I0722 00:34:21.888819   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700
	I0722 00:34:21.888819   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:21.888819   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:21.888997   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:21.894719   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:22.090649   13232 request.go:629] Waited for 194.3317ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:22.090918   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:22.090918   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.090918   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.090918   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.099157   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:34:22.099157   13232 pod_ready.go:92] pod "kube-scheduler-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:22.100066   13232 pod_ready.go:81] duration metric: took 395.8185ms for pod "kube-scheduler-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:22.100066   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:22.294815   13232 request.go:629] Waited for 194.6039ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m02
	I0722 00:34:22.294947   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m02
	I0722 00:34:22.294947   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.294947   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.294947   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.303916   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:34:22.497682   13232 request.go:629] Waited for 192.255ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:22.497923   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:22.497923   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.497923   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.497923   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.502957   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:22.504383   13232 pod_ready.go:92] pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:22.504498   13232 pod_ready.go:81] duration metric: took 404.3121ms for pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:22.504498   13232 pod_ready.go:38] duration metric: took 3.2115s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
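
The repeated "Waited for ... due to client-side throttling" messages in the pod checks above come from the client's token-bucket rate limiter, which deliberately delays requests to avoid flooding the apiserver. A minimal sketch of that mechanism with golang.org/x/time/rate; the QPS and burst values are illustrative, not the client's actual settings:

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // 5 requests/second with a burst of 10 -- illustrative numbers only.
        limiter := rate.NewLimiter(rate.Limit(5), 10)
        for i := 0; i < 15; i++ {
            start := time.Now()
            // Wait blocks until a token is available, producing exactly the
            // kind of client-side delay logged above.
            if err := limiter.Wait(context.Background()); err != nil {
                fmt.Println(err)
                return
            }
            if d := time.Since(start); d > time.Millisecond {
                fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
            }
        }
    }
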
	I0722 00:34:22.504564   13232 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:34:22.517318   13232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:34:22.552204   13232 api_server.go:72] duration metric: took 27.2378505s to wait for apiserver process to appear ...
	I0722 00:34:22.552258   13232 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:34:22.552293   13232 api_server.go:253] Checking apiserver healthz at https://172.28.196.103:8443/healthz ...
	I0722 00:34:22.560133   13232 api_server.go:279] https://172.28.196.103:8443/healthz returned 200:
	ok
	I0722 00:34:22.560547   13232 round_trippers.go:463] GET https://172.28.196.103:8443/version
	I0722 00:34:22.560673   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.560673   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.560673   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.562135   13232 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 00:34:22.562135   13232 api_server.go:141] control plane version: v1.30.3
	I0722 00:34:22.562135   13232 api_server.go:131] duration metric: took 9.8424ms to wait for apiserver health ...
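
Before reading the control-plane version, the client hits /healthz and expects HTTP 200 with the literal body "ok", as seen above. A compact sketch of that check; skipping TLS verification and omitting credentials are assumptions made to keep the example short:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
        }}
        resp, err := client.Get("https://172.28.196.103:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the plain body "ok".
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
    }
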
	I0722 00:34:22.562135   13232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:34:22.686288   13232 request.go:629] Waited for 124.1516ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:34:22.686288   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:34:22.686288   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.686288   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.686288   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.695440   13232 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 00:34:22.705822   13232 system_pods.go:59] 17 kube-system pods found
	I0722 00:34:22.705822   13232 system_pods.go:61] "coredns-7db6d8ff4d-fwrd4" [3d8cf645-4238-4079-a401-18ff3ffdbf66] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "coredns-7db6d8ff4d-ndgcf" [ce30ed50-b5a7-4742-9f83-c60ecd47dc31] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "etcd-ha-474700" [b1ca44b2-3832-4a56-8bd1-c233907d8de3] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "etcd-ha-474700-m02" [f05d667f-c484-47ec-9be9-d5fe65452238] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kindnet-kldv9" [01a2e280-762e-40bc-b79a-66e935b52f26] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kindnet-xmjbz" [c65e9a3b-0f40-4424-af70-b56d7c04018c] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-apiserver-ha-474700" [881080dc-0756-4d59-ae7f-9b1ed240dd5d] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-apiserver-ha-474700-m02" [5906cda9-2d5a-486d-acc3-babb58a51586] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-controller-manager-ha-474700" [9bbed77b-5977-48a3-9816-d3734482dd9c] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-controller-manager-ha-474700-m02" [2e24aaa1-d708-451f-bf42-9d3b887463ea] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-proxy-fwkpc" [896d5fb8-be02-42a8-8ddf-260154a34162] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-proxy-kmnj9" [6a6597e3-9ae2-43cb-8838-ce01b1e9476f] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-scheduler-ha-474700" [fc771043-36f2-49a1-9675-b647b88f692b] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-scheduler-ha-474700-m02" [dd7e08b2-b3bf-4e32-8159-73bfeb9e1c33] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-vip-ha-474700" [f6aaa6ef-c03c-4ff3-889e-dc765c688373] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-vip-ha-474700-m02" [6c94d6e9-f93f-4971-ab0d-6978c39375df] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "storage-provisioner" [f289ea73-0be9-4a29-92d2-2897ee8972a6] Running
	I0722 00:34:22.705822   13232 system_pods.go:74] duration metric: took 143.6848ms to wait for pod list to return data ...
	I0722 00:34:22.705822   13232 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:34:22.891767   13232 request.go:629] Waited for 185.8247ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/default/serviceaccounts
	I0722 00:34:22.891767   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/default/serviceaccounts
	I0722 00:34:22.891767   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.891767   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.891767   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.898410   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:22.898971   13232 default_sa.go:45] found service account: "default"
	I0722 00:34:22.898971   13232 default_sa.go:55] duration metric: took 193.1472ms for default service account to be created ...
	I0722 00:34:22.899030   13232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:34:23.095386   13232 request.go:629] Waited for 196.2966ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:34:23.095386   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:34:23.095386   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:23.095386   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:23.095386   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:23.105371   13232 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 00:34:23.115240   13232 system_pods.go:86] 17 kube-system pods found
	I0722 00:34:23.115240   13232 system_pods.go:89] "coredns-7db6d8ff4d-fwrd4" [3d8cf645-4238-4079-a401-18ff3ffdbf66] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "coredns-7db6d8ff4d-ndgcf" [ce30ed50-b5a7-4742-9f83-c60ecd47dc31] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "etcd-ha-474700" [b1ca44b2-3832-4a56-8bd1-c233907d8de3] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "etcd-ha-474700-m02" [f05d667f-c484-47ec-9be9-d5fe65452238] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "kindnet-kldv9" [01a2e280-762e-40bc-b79a-66e935b52f26] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "kindnet-xmjbz" [c65e9a3b-0f40-4424-af70-b56d7c04018c] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "kube-apiserver-ha-474700" [881080dc-0756-4d59-ae7f-9b1ed240dd5d] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "kube-apiserver-ha-474700-m02" [5906cda9-2d5a-486d-acc3-babb58a51586] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-controller-manager-ha-474700" [9bbed77b-5977-48a3-9816-d3734482dd9c] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-controller-manager-ha-474700-m02" [2e24aaa1-d708-451f-bf42-9d3b887463ea] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-proxy-fwkpc" [896d5fb8-be02-42a8-8ddf-260154a34162] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-proxy-kmnj9" [6a6597e3-9ae2-43cb-8838-ce01b1e9476f] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-scheduler-ha-474700" [fc771043-36f2-49a1-9675-b647b88f692b] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-scheduler-ha-474700-m02" [dd7e08b2-b3bf-4e32-8159-73bfeb9e1c33] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-vip-ha-474700" [f6aaa6ef-c03c-4ff3-889e-dc765c688373] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-vip-ha-474700-m02" [6c94d6e9-f93f-4971-ab0d-6978c39375df] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "storage-provisioner" [f289ea73-0be9-4a29-92d2-2897ee8972a6] Running
	I0722 00:34:23.115355   13232 system_pods.go:126] duration metric: took 216.3225ms to wait for k8s-apps to be running ...
	I0722 00:34:23.115355   13232 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:34:23.127417   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:34:23.160162   13232 system_svc.go:56] duration metric: took 44.8064ms WaitForService to wait for kubelet
	I0722 00:34:23.160162   13232 kubeadm.go:582] duration metric: took 27.8458012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
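
The kubelet check above runs systemctl over SSH; with is-active --quiet, success is signalled purely by exit code 0 rather than by output. A small local sketch of the same exit-code test (it naturally requires a systemd host to run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output; a zero exit code means the unit is active.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet service is not running:", err)
            return
        }
        fmt.Println("kubelet service is running")
    }
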
	I0722 00:34:23.160284   13232 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:34:23.300291   13232 request.go:629] Waited for 139.5192ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes
	I0722 00:34:23.300291   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes
	I0722 00:34:23.300521   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:23.300521   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:23.300521   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:23.309393   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:34:23.310664   13232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:34:23.310734   13232 node_conditions.go:123] node cpu capacity is 2
	I0722 00:34:23.310734   13232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:34:23.310734   13232 node_conditions.go:123] node cpu capacity is 2
	I0722 00:34:23.310832   13232 node_conditions.go:105] duration metric: took 150.5463ms to run NodePressure ...
	I0722 00:34:23.310832   13232 start.go:241] waiting for startup goroutines ...
	I0722 00:34:23.310874   13232 start.go:255] writing updated cluster config ...
	I0722 00:34:23.315219   13232 out.go:177] 
	I0722 00:34:23.333985   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:34:23.333985   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:34:23.340026   13232 out.go:177] * Starting "ha-474700-m03" control-plane node in "ha-474700" cluster
	I0722 00:34:23.342562   13232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 00:34:23.342675   13232 cache.go:56] Caching tarball of preloaded images
	I0722 00:34:23.342675   13232 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 00:34:23.342675   13232 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 00:34:23.343299   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:34:23.350424   13232 start.go:360] acquireMachinesLock for ha-474700-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:34:23.350424   13232 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-474700-m03"
	I0722 00:34:23.351131   13232 start.go:93] Provisioning new machine with config: &{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.200.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:34:23.351131   13232 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0722 00:34:23.356153   13232 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 00:34:23.356548   13232 start.go:159] libmachine.API.Create for "ha-474700" (driver="hyperv")
	I0722 00:34:23.356611   13232 client.go:168] LocalClient.Create starting
	I0722 00:34:23.356611   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0722 00:34:23.357232   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:34:23.357232   13232 main.go:141] libmachine: Parsing certificate...
	I0722 00:34:23.357232   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0722 00:34:23.357232   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:34:23.357232   13232 main.go:141] libmachine: Parsing certificate...
	I0722 00:34:23.357232   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0722 00:34:25.394227   13232 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0722 00:34:25.394227   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:25.394227   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0722 00:34:27.226733   13232 main.go:141] libmachine: [stdout =====>] : False
	
	I0722 00:34:27.226733   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:27.227710   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:34:28.817089   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:34:28.817089   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:28.817089   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:34:32.691791   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:34:32.691836   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:32.693393   13232 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 00:34:33.123300   13232 main.go:141] libmachine: Creating SSH key...
	I0722 00:34:33.386554   13232 main.go:141] libmachine: Creating VM...
	I0722 00:34:33.386554   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:34:36.419129   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:34:36.420075   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:36.420218   13232 main.go:141] libmachine: Using switch "Default Switch"
	I0722 00:34:36.420307   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:34:38.254442   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:34:38.255454   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:38.255454   13232 main.go:141] libmachine: Creating VHD
	I0722 00:34:38.255766   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0722 00:34:42.166938   13232 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1F01A6BA-8DFA-4937-A9FD-1F86FE935E68
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0722 00:34:42.166938   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:42.166938   13232 main.go:141] libmachine: Writing magic tar header
	I0722 00:34:42.166938   13232 main.go:141] libmachine: Writing SSH key tar header
	I0722 00:34:42.179873   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0722 00:34:45.474466   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:34:45.474856   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:45.474938   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\disk.vhd' -SizeBytes 20000MB
	I0722 00:34:48.099119   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:34:48.099119   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:48.099607   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-474700-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0722 00:34:51.905240   13232 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-474700-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0722 00:34:51.905240   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:51.905240   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-474700-m03 -DynamicMemoryEnabled $false
	I0722 00:34:54.255014   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:34:54.256026   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:54.256026   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-474700-m03 -Count 2
	I0722 00:34:56.548325   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:34:56.548360   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:56.548459   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-474700-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\boot2docker.iso'
	I0722 00:34:59.259003   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:34:59.259003   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:59.259003   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-474700-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\disk.vhd'
	I0722 00:35:02.019167   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:02.019167   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:02.019167   13232 main.go:141] libmachine: Starting VM...
	I0722 00:35:02.019457   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-474700-m03
	I0722 00:35:05.309376   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:05.309376   13232 main.go:141] libmachine: [stderr =====>] : 
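
As the [executing ==>] lines show, the machine driver controls Hyper-V entirely by invoking powershell.exe once per cmdlet (New-VHD, New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Start-VM). A stripped-down sketch of that exec pattern; the VM name and the exact cmdlet arguments here are illustrative, not the driver's full sequence:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ps runs a single PowerShell command non-interactively, the same way
    // the driver invokes each Hyper-V cmdlet in the log above.
    func ps(command string) (string, error) {
        out, err := exec.Command("powershell.exe",
            "-NoProfile", "-NonInteractive", command).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Illustrative sequence; a real driver checks each step's output.
        steps := []string{
            `Hyper-V\New-VM demo-vm -MemoryStartupBytes 2200MB`,
            `Hyper-V\Set-VMMemory -VMName demo-vm -DynamicMemoryEnabled $false`,
            `Hyper-V\Set-VMProcessor demo-vm -Count 2`,
            `Hyper-V\Start-VM demo-vm`,
        }
        for _, s := range steps {
            out, err := ps(s)
            fmt.Printf("==> %s\n%s\n", s, out)
            if err != nil {
                fmt.Println("step failed:", err)
                return
            }
        }
    }
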
	I0722 00:35:05.309807   13232 main.go:141] libmachine: Waiting for host to start...
	I0722 00:35:05.309917   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:07.727223   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:07.727223   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:07.728041   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:10.419238   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:10.419484   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:11.421311   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:13.753573   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:13.753695   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:13.753695   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:16.361010   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:16.361010   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:17.375290   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:19.703675   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:19.703875   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:19.703981   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:22.383816   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:22.384600   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:23.385776   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:25.704946   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:25.704946   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:25.705090   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:28.429512   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:28.429512   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:29.443177   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:31.791613   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:31.791613   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:31.791860   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:34.477469   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:35:34.478086   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:34.478086   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:36.720307   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:36.720307   13232 main.go:141] libmachine: [stderr =====>] : 
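
Note how the driver alternates between querying the VM state and its first adapter's first IP address, sleeping about a second between rounds until an address appears (172.28.196.120 above). A sketch of that retry loop; the attempt budget is an illustrative assumption, while the PowerShell expression is the one from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // vmIP asks Hyper-V for the first IP address of the VM's first network
    // adapter, using the same expression as the log above.
    func vmIP(name string) (string, error) {
        cmd := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, name)
        out, err := exec.Command("powershell.exe",
            "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        for attempt := 0; attempt < 60; attempt++ { // illustrative budget
            ip, err := vmIP("ha-474700-m03")
            if err == nil && ip != "" {
                fmt.Println("VM is up at", ip)
                return
            }
            time.Sleep(time.Second) // adapter has no address yet; retry
        }
        fmt.Println("timed out waiting for an IP address")
    }
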
	I0722 00:35:36.720477   13232 machine.go:94] provisionDockerMachine start ...
	I0722 00:35:36.720584   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:39.000916   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:39.000916   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:39.001186   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:41.632817   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:35:41.632934   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:41.638726   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:35:41.654435   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:35:41.654545   13232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:35:41.795334   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:35:41.795467   13232 buildroot.go:166] provisioning hostname "ha-474700-m03"
	I0722 00:35:41.795697   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:44.040199   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:44.040398   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:44.040398   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:46.690918   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:35:46.690918   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:46.697209   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:35:46.697408   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:35:46.697408   13232 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-474700-m03 && echo "ha-474700-m03" | sudo tee /etc/hostname
	I0722 00:35:46.866437   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-474700-m03
	
	I0722 00:35:46.866528   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:49.130201   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:49.130201   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:49.130790   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:51.854045   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:35:51.854347   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:51.860156   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:35:51.860909   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:35:51.860909   13232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-474700-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-474700-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-474700-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:35:52.020476   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
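The script above pins the node name in /etc/hosts idempotently: if no line already ends in ha-474700-m03, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. A sketch of assembling that shell fragment host-side before it is shipped over SSH (buildHostsScript is an illustrative name, not minikube's):

    package main

    import "fmt"

    // buildHostsScript returns a shell fragment that makes sure /etc/hosts
    // maps 127.0.1.1 to the given hostname exactly once.
    func buildHostsScript(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() {
    	fmt.Println(buildHostsScript("ha-474700-m03"))
    }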
	I0722 00:35:52.020476   13232 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0722 00:35:52.020584   13232 buildroot.go:174] setting up certificates
	I0722 00:35:52.020584   13232 provision.go:84] configureAuth start
	I0722 00:35:52.020655   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:54.256801   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:54.256801   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:54.257165   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:56.925731   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:35:56.925731   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:56.926099   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:59.201223   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:59.201223   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:59.202413   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:01.870489   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:01.870615   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:01.870615   13232 provision.go:143] copyHostCerts
	I0722 00:36:01.870776   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0722 00:36:01.871123   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0722 00:36:01.871123   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0722 00:36:01.871620   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0722 00:36:01.872777   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0722 00:36:01.873144   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0722 00:36:01.873208   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0722 00:36:01.873688   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0722 00:36:01.874321   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0722 00:36:01.874889   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0722 00:36:01.874889   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0722 00:36:01.875271   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0722 00:36:01.876368   13232 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-474700-m03 san=[127.0.0.1 172.28.196.120 ha-474700-m03 localhost minikube]
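The server certificate generated here is signed by the shared CA and carries the SANs listed in the log line: the loopback address, the VM IP, the node name, and the generic localhost/minikube names. A condensed crypto/x509 sketch of SAN-bearing server-cert issuance (error handling trimmed; in reality the CA pair is loaded from ca.pem/ca-key.pem rather than generated on the spot):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// caCert/caKey would normally be parsed from ca.pem / ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-474700-m03"}},
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the log line above:
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.196.120")},
    		DNSNames:    []string{"ha-474700-m03", "localhost", "minikube"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }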
	I0722 00:36:02.112908   13232 provision.go:177] copyRemoteCerts
	I0722 00:36:02.124908   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:36:02.124908   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:04.336044   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:04.336044   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:04.336906   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:07.043170   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:07.043170   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:07.043170   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\id_rsa Username:docker}
	I0722 00:36:07.169898   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0444428s)
	I0722 00:36:07.169967   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0722 00:36:07.170458   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 00:36:07.215611   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0722 00:36:07.215611   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 00:36:07.262794   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0722 00:36:07.263373   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:36:07.312960   13232 provision.go:87] duration metric: took 15.2921453s to configureAuth
	I0722 00:36:07.312960   13232 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:36:07.313623   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:36:07.313623   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:09.599109   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:09.599109   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:09.599174   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:12.302682   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:12.303681   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:12.309238   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:36:12.309780   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:36:12.309780   13232 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 00:36:12.450442   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 00:36:12.450442   13232 buildroot.go:70] root file system type: tmpfs
	I0722 00:36:12.450792   13232 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 00:36:12.450960   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:14.677118   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:14.677118   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:14.677118   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:17.315714   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:17.316356   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:17.321739   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:36:17.322202   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:36:17.322354   13232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.196.103"
	Environment="NO_PROXY=172.28.196.103,172.28.200.182"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 00:36:17.487480   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.196.103
	Environment=NO_PROXY=172.28.196.103,172.28.200.182
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 00:36:17.487685   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:19.737704   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:19.737704   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:19.737704   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:22.414646   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:22.414766   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:22.420167   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:36:22.421138   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:36:22.421200   13232 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 00:36:24.713630   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
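The activation step above uses a compare-then-swap idiom: the rendered unit is written to docker.service.new and only moved into place (followed by daemon-reload, enable, and restart) when it differs from the installed file; on this fresh VM the diff fails with "No such file or directory", which also takes the install branch. The same idiom in Go, as a local sketch (paths and reload steps are illustrative):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installIfChanged writes newContent to path only when it differs from the
    // current file (or the file is missing), then runs the supplied reload steps.
    func installIfChanged(path string, newContent []byte, reload ...string) error {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, newContent) {
    		return nil // nothing changed, skip the service restart
    	}
    	if err := os.WriteFile(path+".new", newContent, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	for _, step := range reload {
    		if out, err := exec.Command("sh", "-c", step).CombinedOutput(); err != nil {
    			return fmt.Errorf("%s: %v: %s", step, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=example\n")
    	_ = installIfChanged("/tmp/docker.service", unit,
    		"systemctl daemon-reload", "systemctl restart docker")
    }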
	I0722 00:36:24.713630   13232 machine.go:97] duration metric: took 47.9925701s to provisionDockerMachine
	I0722 00:36:24.713630   13232 client.go:171] duration metric: took 2m1.3555374s to LocalClient.Create
	I0722 00:36:24.713630   13232 start.go:167] duration metric: took 2m1.355601s to libmachine.API.Create "ha-474700"
	I0722 00:36:24.713630   13232 start.go:293] postStartSetup for "ha-474700-m03" (driver="hyperv")
	I0722 00:36:24.713630   13232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:36:24.727972   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:36:24.727972   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:26.940838   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:26.940838   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:26.941378   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:29.631168   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:29.631383   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:29.631504   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\id_rsa Username:docker}
	I0722 00:36:29.736594   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0085611s)
	I0722 00:36:29.749465   13232 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:36:29.756356   13232 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:36:29.756493   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0722 00:36:29.756572   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0722 00:36:29.757890   13232 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0722 00:36:29.757890   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0722 00:36:29.769568   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:36:29.789216   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0722 00:36:29.836118   13232 start.go:296] duration metric: took 5.122426s for postStartSetup
	I0722 00:36:29.839300   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:32.065395   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:32.065439   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:32.065520   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:34.713090   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:34.713637   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:34.713931   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:36:34.716519   13232 start.go:128] duration metric: took 2m11.3637463s to createHost
	I0722 00:36:34.716519   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:36.933876   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:36.933876   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:36.933876   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:39.565291   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:39.565291   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:39.571840   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:36:39.572638   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:36:39.572638   13232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:36:39.708100   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721608599.724808444
	
	I0722 00:36:39.708212   13232 fix.go:216] guest clock: 1721608599.724808444
	I0722 00:36:39.708212   13232 fix.go:229] Guest: 2024-07-22 00:36:39.724808444 +0000 UTC Remote: 2024-07-22 00:36:34.7165199 +0000 UTC m=+595.638912801 (delta=5.008288544s)
	I0722 00:36:39.708212   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:41.916852   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:41.917908   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:41.917961   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:44.552831   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:44.552831   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:44.558744   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:36:44.559349   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:36:44.559349   13232 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721608599
	I0722 00:36:44.713095   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jul 22 00:36:39 UTC 2024
	
	I0722 00:36:44.713177   13232 fix.go:236] clock set: Mon Jul 22 00:36:39 UTC 2024
	 (err=<nil>)
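createHost ends by reconciling clocks: the guest's `date +%s.%N` output is compared with the host clock, and because the drift here (the 5.008288544s delta above) is non-trivial, the guest clock is reset with `sudo date -s @<unix-seconds>`. A sketch of the drift computation (the one-second threshold is an assumption; minikube's cut-off may differ):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDrift parses the guest's `date +%s.%N` output and returns the
    // signed difference between guest time and the given host time.
    func clockDrift(guestOut string, host time.Time) (time.Duration, error) {
    	secs := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)[0]
    	unix, err := strconv.ParseInt(secs, 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	return time.Unix(unix, 0).Sub(host), nil
    }

    func main() {
    	drift, _ := clockDrift("1721608599.724808444", time.Unix(1721608594, 0))
    	if drift > time.Second || drift < -time.Second {
    		// would be sent over SSH as: sudo date -s @<seconds>
    		fmt.Printf("sudo date -s @%d (drift %v)\n", time.Now().Unix(), drift)
    	}
    }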
	I0722 00:36:44.713177   13232 start.go:83] releasing machines lock for "ha-474700-m03", held for 2m21.3605001s
	I0722 00:36:44.713432   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:46.924525   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:46.924525   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:46.925070   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:49.598394   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:49.598394   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:49.601724   13232 out.go:177] * Found network options:
	I0722 00:36:49.603968   13232 out.go:177]   - NO_PROXY=172.28.196.103,172.28.200.182
	W0722 00:36:49.606780   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 00:36:49.606780   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 00:36:49.609202   13232 out.go:177]   - NO_PROXY=172.28.196.103,172.28.200.182
	W0722 00:36:49.611812   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 00:36:49.611846   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 00:36:49.612298   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 00:36:49.612298   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 00:36:49.616247   13232 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0722 00:36:49.616380   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:49.627120   13232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 00:36:49.627120   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:51.932258   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:51.932258   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:51.932258   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:51.972481   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:51.972481   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:51.972559   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:54.775594   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:54.775697   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:54.776000   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\id_rsa Username:docker}
	I0722 00:36:54.803485   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:54.804081   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:54.804455   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\id_rsa Username:docker}
	I0722 00:36:54.873927   13232 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2575819s)
	W0722 00:36:54.874104   13232 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
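Note the failure mode: the connectivity probe is issued as `curl.exe`, the Windows binary name, but ssh_runner executes it inside the Linux guest, where only `curl` exists, so the probe exits 127 and the proxy warning below is printed regardless of actual connectivity. A sketch of choosing the binary name by where the command will run rather than by the host OS (illustrative only, not minikube's code):

    package main

    import (
    	"fmt"
    	"runtime"
    )

    // curlBinary returns the curl executable name appropriate for the system
    // the command will actually run on, not the system building the command.
    func curlBinary(runsOnWindows bool) string {
    	return map[bool]string{true: "curl.exe", false: "curl"}[runsOnWindows]
    }

    func main() {
    	// Probe executed inside the Linux guest over SSH:
    	fmt.Println(curlBinary(false), "-sS -m 2 https://registry.k8s.io/")
    	// Probe executed directly on the Windows host:
    	fmt.Println(curlBinary(runtime.GOOS == "windows"), "-sS -m 2 https://registry.k8s.io/")
    }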
	I0722 00:36:54.907679   13232 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2804958s)
	W0722 00:36:54.907679   13232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:36:54.919797   13232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:36:54.950791   13232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:36:54.950791   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:36:54.950791   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0722 00:36:54.990421   13232 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0722 00:36:54.990421   13232 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0722 00:36:55.000715   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 00:36:55.034751   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 00:36:55.060227   13232 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 00:36:55.071665   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 00:36:55.103004   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:36:55.135635   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 00:36:55.168288   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:36:55.199840   13232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:36:55.232583   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 00:36:55.268384   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 00:36:55.299531   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 00:36:55.329997   13232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:36:55.362518   13232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:36:55.393645   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:36:55.609056   13232 ssh_runner.go:195] Run: sudo systemctl restart containerd
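The sed sequence above rewrites /etc/containerd/config.toml in place: pinning sandbox_image to pause:3.9, forcing SystemdCgroup = false (the cgroupfs driver), migrating runtime references to io.containerd.runc.v2, resetting conf_dir, and re-enabling unprivileged ports, before reloading systemd and restarting containerd. The same edits as local Go regex rewrites (a sketch; the real flow applies them remotely via sed exactly as logged):

    package main

    import (
    	"os"
    	"regexp"
    )

    // rules mirrors the sed expressions in the log above.
    var rules = []struct{ re, repl string }{
    	{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
    	{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
    	{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
    	{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
    	{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
    	{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
    }

    func main() {
    	conf, err := os.ReadFile("/etc/containerd/config.toml")
    	if err != nil {
    		panic(err)
    	}
    	for _, r := range rules {
    		conf = regexp.MustCompile(r.re).ReplaceAll(conf, []byte(r.repl))
    	}
    	if err := os.WriteFile("/etc/containerd/config.toml", conf, 0o644); err != nil {
    		panic(err)
    	}
    }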
	I0722 00:36:55.642610   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:36:55.654767   13232 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 00:36:55.692272   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:36:55.732504   13232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:36:55.772466   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:36:55.807707   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:36:55.842765   13232 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 00:36:55.902280   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:36:55.925612   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:36:55.973614   13232 ssh_runner.go:195] Run: which cri-dockerd
	I0722 00:36:55.989732   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 00:36:56.007499   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 00:36:56.050425   13232 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 00:36:56.250577   13232 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 00:36:56.453617   13232 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 00:36:56.453617   13232 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 00:36:56.504327   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:36:56.707125   13232 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 00:36:59.313427   13232 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6062709s)
	I0722 00:36:59.325120   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 00:36:59.363158   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:36:59.399933   13232 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 00:36:59.603909   13232 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 00:36:59.823122   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:37:00.028193   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 00:37:00.068537   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:37:00.110280   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:37:00.318684   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 00:37:00.430119   13232 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 00:37:00.443023   13232 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 00:37:00.451863   13232 start.go:563] Will wait 60s for crictl version
	I0722 00:37:00.463543   13232 ssh_runner.go:195] Run: which crictl
	I0722 00:37:00.479465   13232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:37:00.538583   13232 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 00:37:00.549369   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:37:00.594728   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:37:00.637511   13232 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 00:37:00.643968   13232 out.go:177]   - env NO_PROXY=172.28.196.103
	I0722 00:37:00.646205   13232 out.go:177]   - env NO_PROXY=172.28.196.103,172.28.200.182
	I0722 00:37:00.649291   13232 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0722 00:37:00.652750   13232 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0722 00:37:00.652750   13232 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0722 00:37:00.653779   13232 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0722 00:37:00.653779   13232 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e8:0a:ec Flags:up|broadcast|multicast|running}
	I0722 00:37:00.656696   13232 ip.go:210] interface addr: fe80::cedd:59ec:4db2:d0bf/64
	I0722 00:37:00.656696   13232 ip.go:210] interface addr: 172.28.192.1/20
	I0722 00:37:00.668281   13232 ssh_runner.go:195] Run: grep 172.28.192.1	host.minikube.internal$ /etc/hosts
	I0722 00:37:00.674832   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:37:00.697489   13232 mustload.go:65] Loading cluster: ha-474700
	I0722 00:37:00.698191   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:37:00.698869   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:37:02.872420   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:37:02.872420   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:37:02.872420   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:37:02.874108   13232 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700 for IP: 172.28.196.120
	I0722 00:37:02.874108   13232 certs.go:194] generating shared ca certs ...
	I0722 00:37:02.874200   13232 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:37:02.874466   13232 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0722 00:37:02.875107   13232 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0722 00:37:02.875202   13232 certs.go:256] generating profile certs ...
	I0722 00:37:02.875556   13232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.key
	I0722 00:37:02.875556   13232 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.0446dbb5
	I0722 00:37:02.876311   13232 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.0446dbb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.196.103 172.28.200.182 172.28.196.120 172.28.207.254]
	I0722 00:37:03.305979   13232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.0446dbb5 ...
	I0722 00:37:03.305979   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.0446dbb5: {Name:mkdaa609a243c04f8e19fadeebb19c304ceabc4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:37:03.307551   13232 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.0446dbb5 ...
	I0722 00:37:03.307551   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.0446dbb5: {Name:mke4bb823a6cb6ba99c36a4f3e04a4b18f7f04a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:37:03.308122   13232 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.0446dbb5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt
	I0722 00:37:03.323123   13232 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.0446dbb5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key
	I0722 00:37:03.324114   13232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key
	I0722 00:37:03.324114   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 00:37:03.324114   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0722 00:37:03.324853   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 00:37:03.325074   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 00:37:03.325168   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 00:37:03.325423   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 00:37:03.325556   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 00:37:03.325690   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 00:37:03.326182   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem (1338 bytes)
	W0722 00:37:03.326539   13232 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100_empty.pem, impossibly tiny 0 bytes
	I0722 00:37:03.326678   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0722 00:37:03.327089   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0722 00:37:03.327490   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0722 00:37:03.327882   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0722 00:37:03.328426   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem (1708 bytes)
	I0722 00:37:03.328682   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:37:03.328913   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem -> /usr/share/ca-certificates/5100.pem
	I0722 00:37:03.329086   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /usr/share/ca-certificates/51002.pem
	I0722 00:37:03.329295   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:37:05.596764   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:37:05.597074   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:37:05.597074   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:37:08.268556   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:37:08.268556   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:37:08.269481   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:37:08.368656   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0722 00:37:08.375810   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0722 00:37:08.412061   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0722 00:37:08.418523   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0722 00:37:08.450426   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0722 00:37:08.456396   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0722 00:37:08.488560   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0722 00:37:08.494568   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0722 00:37:08.533867   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0722 00:37:08.540722   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0722 00:37:08.573009   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0722 00:37:08.580294   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0722 00:37:08.600750   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:37:08.650658   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:37:08.700308   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:37:08.748824   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 00:37:08.799916   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0722 00:37:08.847179   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:37:08.893251   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:37:08.942053   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:37:08.988247   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:37:09.034533   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem --> /usr/share/ca-certificates/5100.pem (1338 bytes)
	I0722 00:37:09.084353   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /usr/share/ca-certificates/51002.pem (1708 bytes)
	I0722 00:37:09.131526   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0722 00:37:09.164583   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0722 00:37:09.195841   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0722 00:37:09.228611   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0722 00:37:09.266275   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0722 00:37:09.298934   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0722 00:37:09.334701   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0722 00:37:09.382221   13232 ssh_runner.go:195] Run: openssl version
	I0722 00:37:09.402243   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:37:09.434432   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:37:09.441388   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:37:09.454195   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:37:09.476834   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:37:09.507828   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5100.pem && ln -fs /usr/share/ca-certificates/5100.pem /etc/ssl/certs/5100.pem"
	I0722 00:37:09.540792   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5100.pem
	I0722 00:37:09.548223   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:45 /usr/share/ca-certificates/5100.pem
	I0722 00:37:09.560855   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5100.pem
	I0722 00:37:09.582630   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5100.pem /etc/ssl/certs/51391683.0"
	I0722 00:37:09.616481   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51002.pem && ln -fs /usr/share/ca-certificates/51002.pem /etc/ssl/certs/51002.pem"
	I0722 00:37:09.652231   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51002.pem
	I0722 00:37:09.659905   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:45 /usr/share/ca-certificates/51002.pem
	I0722 00:37:09.675056   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51002.pem
	I0722 00:37:09.695983   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51002.pem /etc/ssl/certs/3ec20f2e.0"
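Each CA dropped into /usr/share/ca-certificates is then exposed to OpenSSL's hashed-directory lookup by symlinking it under /etc/ssl/certs as <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (b5213941, 51391683, and 3ec20f2e above). A sketch of that pairing, shelling out to openssl just as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash symlinks certPath into dir under <hash>.0 so that
    // OpenSSL's hashed-directory lookup can find it.
    func linkBySubjectHash(certPath, dir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // mimic ln -fs: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }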
	I0722 00:37:09.730578   13232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:37:09.736721   13232 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 00:37:09.736721   13232 kubeadm.go:934] updating node {m03 172.28.196.120 8443 v1.30.3 docker true true} ...
	I0722 00:37:09.736721   13232 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-474700-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.196.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
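The kubelet drop-in above is rendered per node: the inherited ExecStart is cleared and re-issued with --hostname-override and --node-ip specific to m03. A text/template sketch of rendering such a drop-in (the field names are illustrative, not minikube's template):

    package main

    import (
    	"os"
    	"text/template"
    )

    const dropIn = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	tpl := template.Must(template.New("kubelet").Parse(dropIn))
    	_ = tpl.Execute(os.Stdout, map[string]string{
    		"Version":  "v1.30.3",
    		"Hostname": "ha-474700-m03",
    		"NodeIP":   "172.28.196.120",
    	})
    }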
	I0722 00:37:09.736721   13232 kube-vip.go:115] generating kube-vip config ...
	I0722 00:37:09.749142   13232 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 00:37:09.780407   13232 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 00:37:09.780557   13232 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0722 00:37:09.792701   13232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:37:09.809384   13232 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0722 00:37:09.821178   13232 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0722 00:37:09.838950   13232 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0722 00:37:09.839117   13232 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0722 00:37:09.839117   13232 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0722 00:37:09.839340   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 00:37:09.839424   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 00:37:09.853647   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:37:09.856286   13232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 00:37:09.857238   13232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 00:37:09.879596   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 00:37:09.879596   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0722 00:37:09.879720   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0722 00:37:09.879879   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0722 00:37:09.879974   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0722 00:37:09.892202   13232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 00:37:09.943022   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0722 00:37:09.943022   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
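
Each "Not caching binary" line above fetches a binary from dl.k8s.io with a checksum= fragment pointing at the matching .sha256 file, so the download is verified before it is scp'd into /var/lib/minikube/binaries. A hedged sketch of that verify-while-downloading pattern (fetchVerified is a hypothetical helper; minikube delegates this to its own download package):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetchVerified downloads url to dst and checks the body against
    // the hex digest published at url+".sha256".
    func fetchVerified(url, dst string) error {
    	sumResp, err := http.Get(url + ".sha256")
    	if err != nil {
    		return err
    	}
    	defer sumResp.Body.Close()
    	sumBytes, err := io.ReadAll(sumResp.Body)
    	if err != nil {
    		return err
    	}
    	// The .sha256 file's first token is the digest.
    	want := strings.Fields(string(sumBytes))[0]

    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()

    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()

    	h := sha256.New()
    	// Tee the body through the hash so one pass both writes and digests.
    	if _, err := io.Copy(out, io.TeeReader(resp.Body, h)); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
    	}
    	return nil
    }

    func main() {
    	url := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet"
    	if err := fetchVerified(url, "kubelet"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
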
	I0722 00:37:11.307393   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0722 00:37:11.326748   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0722 00:37:11.359910   13232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:37:11.390159   13232 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0722 00:37:11.448634   13232 ssh_runner.go:195] Run: grep 172.28.207.254	control-plane.minikube.internal$ /etc/hosts
	I0722 00:37:11.455556   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
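
The one-liner above makes the control-plane.minikube.internal mapping idempotent: grep -v strips any stale entry, echo appends the fresh VIP mapping, the result lands in a PID-suffixed temp file (/tmp/h.$$), and sudo cp installs it (a plain "> /etc/hosts" redirect would run as the unprivileged shell, not as root). The same idea expressed locally in Go, as a sketch with illustrative paths and names:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost rewrites hostsPath so exactly one line maps ip to host.
    // Illustrative only; minikube does this remotely over SSH instead.
    func pinHost(hostsPath, ip, host string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop any previous mapping for this hostname, like grep -v.
    		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	tmp := hostsPath + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	// Rename is atomic on the same filesystem, so readers never
    	// observe a half-written hosts file.
    	return os.Rename(tmp, hostsPath)
    }

    func main() {
    	if err := pinHost("/etc/hosts", "172.28.207.254", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
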
	I0722 00:37:11.489914   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:37:11.696729   13232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:37:11.728415   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:37:11.729296   13232 start.go:317] joinCluster: &{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.200.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.196.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:37:11.729296   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0722 00:37:11.729296   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:37:13.948953   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:37:13.948953   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:37:13.948953   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:37:16.621499   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:37:16.621499   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:37:16.622237   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:37:16.846791   13232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1174336s)
	I0722 00:37:16.846861   13232 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.196.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:37:16.846861   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zb3440.bxdiafssaaw02ybu --discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-474700-m03 --control-plane --apiserver-advertise-address=172.28.196.120 --apiserver-bind-port=8443"
	I0722 00:38:02.410490   13232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zb3440.bxdiafssaaw02ybu --discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-474700-m03 --control-plane --apiserver-advertise-address=172.28.196.120 --apiserver-bind-port=8443": (45.562521s)
	I0722 00:38:02.410561   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0722 00:38:03.269624   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-474700-m03 minikube.k8s.io/updated_at=2024_07_22T00_38_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-474700 minikube.k8s.io/primary=false
	I0722 00:38:03.487278   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-474700-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0722 00:38:03.665811   13232 start.go:319] duration metric: took 51.9358938s to joinCluster
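
The 51.9s joinCluster step above is three remote commands: "kubeadm token create --print-join-command --ttl=0" mints a fresh bootstrap token and prints a worker join line, minikube appends the control-plane flags visible in the log (--control-plane, --apiserver-advertise-address, --node-name, --apiserver-bind-port), and the assembled command runs on m03. A hypothetical local sketch of that flag surgery:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Step 1: ask the existing control plane for a join command.
    	// (minikube runs this remotely over SSH; shown locally here.)
    	out, err := exec.Command("kubeadm", "token", "create",
    		"--print-join-command", "--ttl=0").Output()
    	if err != nil {
    		panic(err)
    	}
    	joinCmd := strings.TrimSpace(string(out))

    	// Step 2: promote the worker join line to a control-plane join
    	// by appending the flags seen in the log (values illustrative).
    	joinCmd = strings.Join([]string{
    		joinCmd,
    		"--control-plane",
    		"--apiserver-advertise-address=172.28.196.120",
    		"--apiserver-bind-port=8443",
    		"--node-name=ha-474700-m03",
    	}, " ")

    	// Step 3: minikube executes this via ssh_runner on the new node.
    	fmt.Println(joinCmd)
    }
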
	I0722 00:38:03.665811   13232 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.28.196.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:38:03.666859   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:38:03.668852   13232 out.go:177] * Verifying Kubernetes components...
	I0722 00:38:03.685256   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:38:04.113288   13232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:38:04.144694   13232 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:38:04.145335   13232 kapi.go:59] client config for ha-474700: &rest.Config{Host:"https://172.28.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2085e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0722 00:38:04.145335   13232 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.207.254:8443 with https://172.28.196.103:8443
	I0722 00:38:04.146904   13232 node_ready.go:35] waiting up to 6m0s for node "ha-474700-m03" to be "Ready" ...
	I0722 00:38:04.147085   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:04.147126   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:04.147126   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:04.147155   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:04.160448   13232 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0722 00:38:04.648809   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:04.648809   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:04.648809   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:04.648809   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:04.655027   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:05.152771   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:05.153032   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:05.153032   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:05.153032   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:05.157231   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:05.662243   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:05.662998   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:05.662998   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:05.662998   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:05.668909   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:06.152095   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:06.152366   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:06.152366   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:06.152366   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:06.172595   13232 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0722 00:38:06.174473   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:06.658843   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:06.658843   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:06.659092   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:06.659092   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:06.666843   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:38:07.148263   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:07.148263   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:07.148263   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:07.148263   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:07.152609   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:07.655165   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:07.655165   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:07.655165   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:07.655165   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:07.660852   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:08.158685   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:08.158685   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:08.159011   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:08.159011   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:08.163160   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:08.661908   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:08.662002   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:08.662002   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:08.662002   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:08.667587   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:08.668405   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:09.153011   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:09.153011   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:09.153011   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:09.153011   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:09.158456   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:09.657814   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:09.657814   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:09.657814   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:09.657814   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:09.662816   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:10.149246   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:10.149246   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:10.149246   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:10.149619   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:10.158961   13232 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 00:38:10.657915   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:10.657915   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:10.657915   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:10.657915   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:10.666161   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:38:11.147483   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:11.147483   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:11.147483   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:11.147483   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:11.262163   13232 round_trippers.go:574] Response Status: 200 OK in 114 milliseconds
	I0722 00:38:11.262740   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:11.650649   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:11.650649   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:11.650649   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:11.651008   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:11.655572   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:12.156098   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:12.156337   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:12.156337   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:12.156545   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:12.162223   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:12.657701   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:12.657701   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:12.657701   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:12.657701   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:12.662286   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:13.149241   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:13.149364   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:13.149364   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:13.149364   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:13.161562   13232 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0722 00:38:13.653853   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:13.653918   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:13.653918   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:13.653918   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:13.658666   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:13.659291   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:14.159724   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:14.159724   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:14.159724   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:14.159724   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:14.165102   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:14.648902   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:14.648902   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:14.648902   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:14.649030   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:14.653257   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:15.154609   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:15.154609   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:15.154609   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:15.154609   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:15.159229   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:15.654999   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:15.655133   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:15.655133   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:15.655133   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:15.660544   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:15.661307   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:16.154592   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:16.154592   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:16.154592   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:16.154708   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:16.159210   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:16.656779   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:16.656779   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:16.656779   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:16.656779   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:16.662583   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:17.159558   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:17.159558   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:17.159558   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:17.159558   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:17.166313   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:17.660634   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:17.660634   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:17.660634   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:17.660634   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:17.666208   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:17.667111   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:18.147893   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:18.147893   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:18.147893   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:18.147893   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:18.163443   13232 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0722 00:38:18.649657   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:18.649657   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:18.649657   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:18.649955   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:18.655607   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:19.148473   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:19.148473   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:19.148473   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:19.148473   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:19.153134   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:19.652721   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:19.652721   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:19.652721   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:19.652721   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:19.658383   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:20.155622   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:20.155884   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:20.155884   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:20.155884   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:20.161233   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:20.163077   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:20.657723   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:20.657723   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:20.657723   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:20.658075   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:20.662896   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:21.158846   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:21.158846   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:21.158846   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:21.159070   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:21.163846   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:21.649649   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:21.649725   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:21.649725   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:21.649725   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:21.655022   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:22.149107   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:22.149192   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:22.149192   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:22.149192   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:22.153671   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:22.652291   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:22.652291   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:22.652291   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:22.652291   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:22.656605   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:22.657884   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:23.152509   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:23.152509   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:23.152509   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:23.152509   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:23.158168   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:23.652818   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:23.652925   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:23.652925   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:23.652925   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:23.658592   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:24.153018   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:24.153018   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:24.153107   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:24.153107   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:24.157770   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:24.650942   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:24.651039   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:24.651039   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:24.651097   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:24.655573   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.151260   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:25.151260   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.151260   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.151260   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.156749   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:25.157911   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:25.650272   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:25.650272   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.650466   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.650466   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.654928   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.655999   13232 node_ready.go:49] node "ha-474700-m03" has status "Ready":"True"
	I0722 00:38:25.655999   13232 node_ready.go:38] duration metric: took 21.5088052s for node "ha-474700-m03" to be "Ready" ...
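
The half-second GET cadence above is a poll of the node object until its Ready condition reports True, bounded by the 6m0s budget. The equivalent loop with client-go, as a sketch (the kubeconfig path is a placeholder):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 500ms for up to 6 minutes, mirroring the log's
    	// cadence and the "waiting up to 6m0s" budget.
    	err = wait.PollUntilContextTimeout(context.Background(),
    		500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-474700-m03", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(`node "ha-474700-m03" is Ready`)
    }
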
	I0722 00:38:25.655999   13232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:38:25.655999   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:38:25.655999   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.655999   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.655999   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.666013   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:38:25.677288   13232 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.677288   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fwrd4
	I0722 00:38:25.677288   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.677288   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.677288   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.682375   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.684475   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:25.684475   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.684537   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.684537   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.687968   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:38:25.689212   13232 pod_ready.go:92] pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:25.689212   13232 pod_ready.go:81] duration metric: took 11.9237ms for pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace to be "Ready" ...
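
The pod_ready waits that follow repeat one pattern per system-critical pod: GET the pod, key on its PodReady condition, then GET the hosting node to confirm it is still Ready, which is why every pod request above is paired with a node request. A minimal condition helper, sketched:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether a pod's PodReady condition is True;
    // this mirrors what the pod_ready waiter keys on.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// A stand-in pod object; in practice this comes from the API.
    	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
    		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
    	}}}
    	fmt.Println(isPodReady(pod)) // true
    }
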
	I0722 00:38:25.689978   13232 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.689978   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ndgcf
	I0722 00:38:25.689978   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.689978   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.689978   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.696177   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:25.696837   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:25.696837   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.696837   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.696837   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.701254   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.701254   13232 pod_ready.go:92] pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:25.702039   13232 pod_ready.go:81] duration metric: took 12.0603ms for pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.702039   13232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.702039   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700
	I0722 00:38:25.702164   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.702216   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.702216   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.705478   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:38:25.706161   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:25.706161   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.706161   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.706161   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.710859   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.710859   13232 pod_ready.go:92] pod "etcd-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:25.710859   13232 pod_ready.go:81] duration metric: took 8.8205ms for pod "etcd-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.710859   13232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.710859   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700-m02
	I0722 00:38:25.710859   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.710859   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.710859   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.715711   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.716817   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:25.716884   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.716884   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.716884   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.733660   13232 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0722 00:38:25.734689   13232 pod_ready.go:92] pod "etcd-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:25.734689   13232 pod_ready.go:81] duration metric: took 23.8299ms for pod "etcd-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.734689   13232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.855541   13232 request.go:629] Waited for 120.8499ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700-m03
	I0722 00:38:25.855541   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700-m03
	I0722 00:38:25.855824   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.855824   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.855824   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.860262   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
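
The "Waited ... due to client-side throttling" messages come from client-go's token-bucket rate limiter: with QPS and Burst left at zero in the rest.Config (as in the kapi client config above), the client falls back to its defaults of 5 requests/s with a burst of 10, and the paired pod+node GETs queue behind it. Raising the limits when building the config avoids the waits, sketched here:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// With QPS/Burst at zero, client-go uses its defaults (5 QPS,
    	// burst 10), which produces the throttling waits in the log.
    	// Raise them for chatty pollers:
    	cfg.QPS = 50
    	cfg.Burst = 100
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    }
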
	I0722 00:38:26.059985   13232 request.go:629] Waited for 198.175ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:26.059985   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:26.059985   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:26.059985   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:26.059985   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:26.065503   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:26.066795   13232 pod_ready.go:92] pod "etcd-ha-474700-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:26.066873   13232 pod_ready.go:81] duration metric: took 332.1792ms for pod "etcd-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:26.066949   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:26.263208   13232 request.go:629] Waited for 196.0499ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700
	I0722 00:38:26.263456   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700
	I0722 00:38:26.263456   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:26.263456   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:26.263456   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:26.271700   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:38:26.465216   13232 request.go:629] Waited for 192.388ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:26.465606   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:26.465606   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:26.465606   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:26.465606   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:26.472019   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:26.472582   13232 pod_ready.go:92] pod "kube-apiserver-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:26.472582   13232 pod_ready.go:81] duration metric: took 405.6286ms for pod "kube-apiserver-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:26.472644   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:26.654049   13232 request.go:629] Waited for 181.3433ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m02
	I0722 00:38:26.654049   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m02
	I0722 00:38:26.654049   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:26.654049   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:26.654049   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:26.660062   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:26.858098   13232 request.go:629] Waited for 196.2789ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:26.858272   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:26.858272   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:26.858272   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:26.858272   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:26.863587   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:26.865239   13232 pod_ready.go:92] pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:26.865298   13232 pod_ready.go:81] duration metric: took 392.6494ms for pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:26.865298   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:27.060426   13232 request.go:629] Waited for 194.8577ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m03
	I0722 00:38:27.060567   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m03
	I0722 00:38:27.060567   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:27.060567   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:27.060567   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:27.065028   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:27.263598   13232 request.go:629] Waited for 197.467ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:27.263710   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:27.263710   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:27.263806   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:27.263806   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:27.268161   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:27.268795   13232 pod_ready.go:92] pod "kube-apiserver-ha-474700-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:27.268795   13232 pod_ready.go:81] duration metric: took 403.4917ms for pod "kube-apiserver-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:27.268795   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:27.451204   13232 request.go:629] Waited for 182.4074ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700
	I0722 00:38:27.451204   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700
	I0722 00:38:27.451204   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:27.451204   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:27.451204   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:27.456969   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:27.656031   13232 request.go:629] Waited for 197.407ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:27.656252   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:27.656252   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:27.656364   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:27.656364   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:27.660710   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:27.662347   13232 pod_ready.go:92] pod "kube-controller-manager-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:27.662441   13232 pod_ready.go:81] duration metric: took 393.6413ms for pod "kube-controller-manager-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:27.662441   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:27.858197   13232 request.go:629] Waited for 195.5505ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m02
	I0722 00:38:27.858308   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m02
	I0722 00:38:27.858308   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:27.858308   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:27.858308   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:27.862849   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:28.061784   13232 request.go:629] Waited for 198.0686ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:28.061784   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:28.061784   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:28.061784   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:28.061784   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:28.071647   13232 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 00:38:28.072917   13232 pod_ready.go:92] pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:28.073004   13232 pod_ready.go:81] duration metric: took 410.5587ms for pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:28.073004   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:28.252122   13232 request.go:629] Waited for 178.685ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m03
	I0722 00:38:28.252347   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m03
	I0722 00:38:28.252347   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:28.252347   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:28.252347   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:28.256968   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:28.455908   13232 request.go:629] Waited for 196.1174ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:28.456015   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:28.456015   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:28.456015   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:28.456015   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:28.461828   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:28.463523   13232 pod_ready.go:92] pod "kube-controller-manager-ha-474700-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:28.463523   13232 pod_ready.go:81] duration metric: took 390.514ms for pod "kube-controller-manager-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:28.463577   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwkpc" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:28.658191   13232 request.go:629] Waited for 194.6112ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkpc
	I0722 00:38:28.658466   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkpc
	I0722 00:38:28.658466   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:28.658466   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:28.658466   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:28.667133   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:38:28.862682   13232 request.go:629] Waited for 193.8701ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:28.862927   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:28.863084   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:28.863084   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:28.863084   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:28.868753   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:28.869679   13232 pod_ready.go:92] pod "kube-proxy-fwkpc" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:28.869791   13232 pod_ready.go:81] duration metric: took 406.0971ms for pod "kube-proxy-fwkpc" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:28.869791   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kmnj9" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:29.064862   13232 request.go:629] Waited for 194.7807ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kmnj9
	I0722 00:38:29.064958   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kmnj9
	I0722 00:38:29.064958   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:29.064958   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:29.064958   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:29.070321   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:29.252245   13232 request.go:629] Waited for 180.7835ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:29.252434   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:29.252434   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:29.252565   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:29.252565   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:29.256385   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:38:29.258077   13232 pod_ready.go:92] pod "kube-proxy-kmnj9" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:29.258149   13232 pod_ready.go:81] duration metric: took 388.3538ms for pod "kube-proxy-kmnj9" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:29.258149   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xzxkz" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:29.456485   13232 request.go:629] Waited for 198.2597ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzxkz
	I0722 00:38:29.456643   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzxkz
	I0722 00:38:29.456643   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:29.456643   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:29.456643   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:29.462353   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:29.662357   13232 request.go:629] Waited for 198.5163ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:29.662357   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:29.662357   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:29.662357   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:29.662357   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:29.667976   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:29.669124   13232 pod_ready.go:92] pod "kube-proxy-xzxkz" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:29.669192   13232 pod_ready.go:81] duration metric: took 411.0378ms for pod "kube-proxy-xzxkz" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:29.669192   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:29.864338   13232 request.go:629] Waited for 194.8009ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700
	I0722 00:38:29.864466   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700
	I0722 00:38:29.864466   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:29.864466   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:29.864466   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:29.870052   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:30.055490   13232 request.go:629] Waited for 184.0146ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:30.055703   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:30.055703   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.055760   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.055760   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.063124   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:38:30.064011   13232 pod_ready.go:92] pod "kube-scheduler-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:30.064122   13232 pod_ready.go:81] duration metric: took 394.9251ms for pod "kube-scheduler-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:30.064122   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:30.262255   13232 request.go:629] Waited for 197.9449ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m02
	I0722 00:38:30.262367   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m02
	I0722 00:38:30.262487   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.262547   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.262547   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.267792   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:30.451747   13232 request.go:629] Waited for 182.0925ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:30.451845   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:30.451978   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.451978   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.451978   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.457166   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:30.458502   13232 pod_ready.go:92] pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:30.458585   13232 pod_ready.go:81] duration metric: took 394.4583ms for pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:30.458585   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:30.654602   13232 request.go:629] Waited for 195.7114ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m03
	I0722 00:38:30.654860   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m03
	I0722 00:38:30.654860   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.655078   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.655078   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.660425   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:30.856691   13232 request.go:629] Waited for 195.1049ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:30.856822   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:30.856822   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.856985   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.856985   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.862334   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:30.863509   13232 pod_ready.go:92] pod "kube-scheduler-ha-474700-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:30.863634   13232 pod_ready.go:81] duration metric: took 404.9188ms for pod "kube-scheduler-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:30.863634   13232 pod_ready.go:38] duration metric: took 5.2075731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:38:30.863634   13232 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:38:30.875327   13232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:38:30.901804   13232 api_server.go:72] duration metric: took 27.2353441s to wait for apiserver process to appear ...
	I0722 00:38:30.901804   13232 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:38:30.901944   13232 api_server.go:253] Checking apiserver healthz at https://172.28.196.103:8443/healthz ...
	I0722 00:38:30.909898   13232 api_server.go:279] https://172.28.196.103:8443/healthz returned 200:
	ok
	I0722 00:38:30.910064   13232 round_trippers.go:463] GET https://172.28.196.103:8443/version
	I0722 00:38:30.910155   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.910155   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.910155   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.910842   13232 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0722 00:38:30.911665   13232 api_server.go:141] control plane version: v1.30.3
	I0722 00:38:30.911779   13232 api_server.go:131] duration metric: took 9.9742ms to wait for apiserver health ...
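	
	The healthz wait above is a plain GET against /healthz on the control-plane endpoint, expecting the literal body "ok". A minimal sketch of the same probe via client-go's discovery REST client (an assumed reproduction, not minikube's own api_server.go code path), given a kubeconfig at the default location that points at this cluster:
	
	package main
	
	import (
		"context"
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load ~/.kube/config; assumes it targets the apiserver logged above.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		// GET /healthz through the client's authenticated transport.
		body, err := cs.Discovery().RESTClient().
			Get().AbsPath("/healthz").
			Do(context.Background()).Raw()
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body) // prints "healthz: ok" when healthy
	}
	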
	I0722 00:38:30.911779   13232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:38:31.060191   13232 request.go:629] Waited for 148.4108ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:38:31.060481   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:38:31.060573   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:31.060573   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:31.060725   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:31.071032   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:38:31.081495   13232 system_pods.go:59] 24 kube-system pods found
	I0722 00:38:31.081495   13232 system_pods.go:61] "coredns-7db6d8ff4d-fwrd4" [3d8cf645-4238-4079-a401-18ff3ffdbf66] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "coredns-7db6d8ff4d-ndgcf" [ce30ed50-b5a7-4742-9f83-c60ecd47dc31] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "etcd-ha-474700" [b1ca44b2-3832-4a56-8bd1-c233907d8de3] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "etcd-ha-474700-m02" [f05d667f-c484-47ec-9be9-d5fe65452238] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "etcd-ha-474700-m03" [55948e51-5624-4969-ad9a-d702816407a6] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kindnet-kldv9" [01a2e280-762e-40bc-b79a-66e935b52f26] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kindnet-mtsts" [099a5306-0035-412a-9219-316d036b0f9e] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kindnet-xmjbz" [c65e9a3b-0f40-4424-af70-b56d7c04018c] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-apiserver-ha-474700" [881080dc-0756-4d59-ae7f-9b1ed240dd5d] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-apiserver-ha-474700-m02" [5906cda9-2d5a-486d-acc3-babb58a51586] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-apiserver-ha-474700-m03" [d47c41fd-ba5e-4754-aa37-8a6f88d5b346] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-controller-manager-ha-474700" [9bbed77b-5977-48a3-9816-d3734482dd9c] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-controller-manager-ha-474700-m02" [2e24aaa1-d708-451f-bf42-9d3b887463ea] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-controller-manager-ha-474700-m03" [6c1370c7-fc72-43f8-af93-7dd0d04fed14] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-proxy-fwkpc" [896d5fb8-be02-42a8-8ddf-260154a34162] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-proxy-kmnj9" [6a6597e3-9ae2-43cb-8838-ce01b1e9476f] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-proxy-xzxkz" [a0af0ee7-b83e-436d-9b25-04642314576a] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-scheduler-ha-474700" [fc771043-36f2-49a1-9675-b647b88f692b] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-scheduler-ha-474700-m02" [dd7e08b2-b3bf-4e32-8159-73bfeb9e1c33] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-scheduler-ha-474700-m03" [221c1654-b31a-4a72-8e3e-8659b9dff52f] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-vip-ha-474700" [f6aaa6ef-c03c-4ff3-889e-dc765c688373] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-vip-ha-474700-m02" [6c94d6e9-f93f-4971-ab0d-6978c39375df] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-vip-ha-474700-m03" [f9fac82c-283d-4b13-9a7e-7a20d90262fa] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "storage-provisioner" [f289ea73-0be9-4a29-92d2-2897ee8972a6] Running
	I0722 00:38:31.081495   13232 system_pods.go:74] duration metric: took 169.7145ms to wait for pod list to return data ...
	I0722 00:38:31.081495   13232 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:38:31.262477   13232 request.go:629] Waited for 180.74ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/default/serviceaccounts
	I0722 00:38:31.262477   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/default/serviceaccounts
	I0722 00:38:31.262477   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:31.262477   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:31.262477   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:31.267621   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:31.268770   13232 default_sa.go:45] found service account: "default"
	I0722 00:38:31.268770   13232 default_sa.go:55] duration metric: took 187.2724ms for default service account to be created ...
	I0722 00:38:31.268770   13232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:38:31.452400   13232 request.go:629] Waited for 183.4589ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:38:31.452761   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:38:31.452908   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:31.452908   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:31.452908   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:31.463047   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:38:31.474374   13232 system_pods.go:86] 24 kube-system pods found
	I0722 00:38:31.474374   13232 system_pods.go:89] "coredns-7db6d8ff4d-fwrd4" [3d8cf645-4238-4079-a401-18ff3ffdbf66] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "coredns-7db6d8ff4d-ndgcf" [ce30ed50-b5a7-4742-9f83-c60ecd47dc31] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "etcd-ha-474700" [b1ca44b2-3832-4a56-8bd1-c233907d8de3] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "etcd-ha-474700-m02" [f05d667f-c484-47ec-9be9-d5fe65452238] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "etcd-ha-474700-m03" [55948e51-5624-4969-ad9a-d702816407a6] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kindnet-kldv9" [01a2e280-762e-40bc-b79a-66e935b52f26] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kindnet-mtsts" [099a5306-0035-412a-9219-316d036b0f9e] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kindnet-xmjbz" [c65e9a3b-0f40-4424-af70-b56d7c04018c] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-apiserver-ha-474700" [881080dc-0756-4d59-ae7f-9b1ed240dd5d] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-apiserver-ha-474700-m02" [5906cda9-2d5a-486d-acc3-babb58a51586] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-apiserver-ha-474700-m03" [d47c41fd-ba5e-4754-aa37-8a6f88d5b346] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-controller-manager-ha-474700" [9bbed77b-5977-48a3-9816-d3734482dd9c] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-controller-manager-ha-474700-m02" [2e24aaa1-d708-451f-bf42-9d3b887463ea] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-controller-manager-ha-474700-m03" [6c1370c7-fc72-43f8-af93-7dd0d04fed14] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-proxy-fwkpc" [896d5fb8-be02-42a8-8ddf-260154a34162] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-proxy-kmnj9" [6a6597e3-9ae2-43cb-8838-ce01b1e9476f] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-proxy-xzxkz" [a0af0ee7-b83e-436d-9b25-04642314576a] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-scheduler-ha-474700" [fc771043-36f2-49a1-9675-b647b88f692b] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-scheduler-ha-474700-m02" [dd7e08b2-b3bf-4e32-8159-73bfeb9e1c33] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-scheduler-ha-474700-m03" [221c1654-b31a-4a72-8e3e-8659b9dff52f] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-vip-ha-474700" [f6aaa6ef-c03c-4ff3-889e-dc765c688373] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-vip-ha-474700-m02" [6c94d6e9-f93f-4971-ab0d-6978c39375df] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-vip-ha-474700-m03" [f9fac82c-283d-4b13-9a7e-7a20d90262fa] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "storage-provisioner" [f289ea73-0be9-4a29-92d2-2897ee8972a6] Running
	I0722 00:38:31.474374   13232 system_pods.go:126] duration metric: took 205.6019ms to wait for k8s-apps to be running ...
	I0722 00:38:31.474374   13232 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:38:31.485374   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:38:31.510377   13232 system_svc.go:56] duration metric: took 36.0026ms WaitForService to wait for kubelet
	I0722 00:38:31.510597   13232 kubeadm.go:582] duration metric: took 27.8441299s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:38:31.510719   13232 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:38:31.657078   13232 request.go:629] Waited for 146.2671ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes
	I0722 00:38:31.657329   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes
	I0722 00:38:31.657329   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:31.657329   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:31.657329   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:31.663727   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:31.665735   13232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:38:31.665807   13232 node_conditions.go:123] node cpu capacity is 2
	I0722 00:38:31.665807   13232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:38:31.665807   13232 node_conditions.go:123] node cpu capacity is 2
	I0722 00:38:31.665807   13232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:38:31.665807   13232 node_conditions.go:123] node cpu capacity is 2
	I0722 00:38:31.665807   13232 node_conditions.go:105] duration metric: took 155.0858ms to run NodePressure ...
	I0722 00:38:31.665807   13232 start.go:241] waiting for startup goroutines ...
	I0722 00:38:31.665921   13232 start.go:255] writing updated cluster config ...
	I0722 00:38:31.678463   13232 ssh_runner.go:195] Run: rm -f paused
	I0722 00:38:31.826090   13232 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:38:31.830185   13232 out.go:177] * Done! kubectl is now configured to use "ha-474700" cluster and "default" namespace by default
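	
	Two things in the readiness phase above are worth separating: the ~5.2s of pod_ready.go waits come from polling each system pod's Ready condition, while the recurring "Waited ... due to client-side throttling" lines come from client-go's default client-side rate limiter (QPS 5, burst 10), not from server-side API priority and fairness. A minimal sketch of the same wait pattern under those assumptions; pollPodReady is a hypothetical helper, not minikube's pod_ready.go, and the pod name is taken from the log for illustration:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// pollPodReady polls a pod until its Ready condition is True,
	// up to 6 minutes, mirroring the 6m0s waits in the log above.
	func pollPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 200*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Raising QPS/Burst above client-go's defaults (5/10) removes the
		// "Waited ... due to client-side throttling" delays seen in the log.
		cfg.QPS = 50
		cfg.Burst = 100
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := pollPodReady(context.Background(), cs, "kube-system", "kube-proxy-fwkpc"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
	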
	
	
	==> Docker <==
	Jul 22 00:30:29 ha-474700 cri-dockerd[1325]: time="2024-07-22T00:30:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b86a30eb1eabb843ab7b5b96b3ebcc7c996a734cec7dc1700c62159d7f231585/resolv.conf as [nameserver 172.28.192.1]"
	Jul 22 00:30:29 ha-474700 cri-dockerd[1325]: time="2024-07-22T00:30:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/84ea3ad80e043b4ca97319e460c8ea0e48342bb3572f4ed3e13443d422bfda00/resolv.conf as [nameserver 172.28.192.1]"
	Jul 22 00:30:29 ha-474700 cri-dockerd[1325]: time="2024-07-22T00:30:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7ef53d69b643b18ca967c22e3a84238afb9e399517b835c7fd62ca9d8875c26c/resolv.conf as [nameserver 172.28.192.1]"
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.379606934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.379928335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.379941535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.380081436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.435349551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.435569952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.435796453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.436932157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.584403433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.585152436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.585256236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.585833238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:39:11 ha-474700 dockerd[1429]: time="2024-07-22T00:39:11.731733898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 00:39:11 ha-474700 dockerd[1429]: time="2024-07-22T00:39:11.731850101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 00:39:11 ha-474700 dockerd[1429]: time="2024-07-22T00:39:11.731865501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:39:11 ha-474700 dockerd[1429]: time="2024-07-22T00:39:11.733063532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:39:11 ha-474700 cri-dockerd[1325]: time="2024-07-22T00:39:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/534ecae774bcffc01be9307d0d62b2037a07352cd25b841ecf7efc05df8cdefb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 22 00:39:13 ha-474700 cri-dockerd[1325]: time="2024-07-22T00:39:13Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 22 00:39:14 ha-474700 dockerd[1429]: time="2024-07-22T00:39:14.184328553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 00:39:14 ha-474700 dockerd[1429]: time="2024-07-22T00:39:14.184504556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 00:39:14 ha-474700 dockerd[1429]: time="2024-07-22T00:39:14.184539656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:39:14 ha-474700 dockerd[1429]: time="2024-07-22T00:39:14.185147065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6d688317ae329       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   534ecae774bcf       busybox-fc5497c4f-tdwp8
	0563e68a100e2       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   7ef53d69b643b       coredns-7db6d8ff4d-fwrd4
	a3f532f981c0c       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   b86a30eb1eabb       coredns-7db6d8ff4d-ndgcf
	685d0f839c603       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   84ea3ad80e043       storage-provisioner
	711176f77704c       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              10 minutes ago       Running             kindnet-cni               0                   f4275a0b2de2d       kindnet-kldv9
	a27150ded0e0a       55bb025d2cfa5                                                                                         10 minutes ago       Running             kube-proxy                0                   a68b96dd366f4       kube-proxy-fwkpc
	a044134e73300       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   246609b5f9dbc       kube-vip-ha-474700
	c45f67167207b       76932a3b37d7e                                                                                         10 minutes ago       Running             kube-controller-manager   0                   a2e406356cc8e       kube-controller-manager-ha-474700
	95bfa6ffee0da       1f6d574d502f3                                                                                         10 minutes ago       Running             kube-apiserver            0                   06585129fa08d       kube-apiserver-ha-474700
	e7c1294e244eb       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   308a38a9ec1f7       etcd-ha-474700
	2ada486ec6f81       3edc18e7b7672                                                                                         10 minutes ago       Running             kube-scheduler            0                   34fbd34d9f618       kube-scheduler-ha-474700
	
	
	==> coredns [0563e68a100e] <==
	[INFO] 10.244.1.2:56844 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217904s
	[INFO] 10.244.1.2:42720 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114602s
	[INFO] 10.244.1.2:48423 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127602s
	[INFO] 10.244.2.2:51009 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.122329269s
	[INFO] 10.244.2.2:50985 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063501s
	[INFO] 10.244.2.2:54792 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140302s
	[INFO] 10.244.0.4:48442 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161902s
	[INFO] 10.244.0.4:41654 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000149602s
	[INFO] 10.244.0.4:37935 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212103s
	[INFO] 10.244.0.4:57981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148702s
	[INFO] 10.244.0.4:36890 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218703s
	[INFO] 10.244.1.2:46368 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129201s
	[INFO] 10.244.1.2:52507 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166502s
	[INFO] 10.244.2.2:46303 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151502s
	[INFO] 10.244.2.2:39484 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119901s
	[INFO] 10.244.2.2:49091 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000333205s
	[INFO] 10.244.0.4:34379 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000255204s
	[INFO] 10.244.0.4:40009 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156502s
	[INFO] 10.244.0.4:43280 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084201s
	[INFO] 10.244.1.2:42604 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000418606s
	[INFO] 10.244.1.2:55399 - 5 "PTR IN 1.192.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107802s
	[INFO] 10.244.2.2:60910 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000259704s
	[INFO] 10.244.2.2:35394 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093001s
	[INFO] 10.244.0.4:53757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000344805s
	[INFO] 10.244.0.4:39593 - 5 "PTR IN 1.192.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138302s
	
	
	==> coredns [a3f532f981c0] <==
	[INFO] 10.244.2.2:39288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248603s
	[INFO] 10.244.2.2:41768 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000093301s
	[INFO] 10.244.0.4:50057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110001s
	[INFO] 10.244.0.4:42132 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000172902s
	[INFO] 10.244.1.2:59888 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.069351904s
	[INFO] 10.244.1.2:58401 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01106196s
	[INFO] 10.244.1.2:58793 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133202s
	[INFO] 10.244.2.2:46512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113201s
	[INFO] 10.244.2.2:45345 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146702s
	[INFO] 10.244.2.2:34632 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000208403s
	[INFO] 10.244.2.2:60032 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125102s
	[INFO] 10.244.2.2:52448 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073901s
	[INFO] 10.244.0.4:59425 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000132702s
	[INFO] 10.244.0.4:43894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000346705s
	[INFO] 10.244.0.4:53758 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192903s
	[INFO] 10.244.1.2:55849 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199003s
	[INFO] 10.244.1.2:39483 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065101s
	[INFO] 10.244.2.2:46288 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000305505s
	[INFO] 10.244.0.4:52757 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154902s
	[INFO] 10.244.1.2:60576 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138802s
	[INFO] 10.244.1.2:59529 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126602s
	[INFO] 10.244.2.2:53578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111201s
	[INFO] 10.244.2.2:42314 - 5 "PTR IN 1.192.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100402s
	[INFO] 10.244.0.4:33210 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141502s
	[INFO] 10.244.0.4:54648 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000250803s
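	
	The NXDOMAIN entries in both coredns logs are expected: with the pod resolv.conf rewritten above (search default.svc.cluster.local svc.cluster.local cluster.local, options ndots:5), a short name such as "kubernetes.default" is expanded through each search domain before the absolute query, so the intermediate forms (e.g. kubernetes.default.default.svc.cluster.local) fail until kubernetes.default.svc.cluster.local answers. A minimal in-pod lookup sketch, assuming it runs inside a cluster pod with that resolv.conf; nothing here is minikube-specific:
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Short-name lookup like the busybox pod performs in the log above.
		// With ndots:5 the resolver tries each search domain first, producing
		// the NXDOMAIN entries coredns records, before the final answer.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		addrs, err := net.DefaultResolver.LookupHost(ctx, "kubernetes.default")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved to:", addrs) // typically the kubernetes service VIP
	}
	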
	
	
	==> describe nodes <==
	Name:               ha-474700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-474700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-474700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_29_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:29:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-474700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:40:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:39:20 +0000   Mon, 22 Jul 2024 00:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:39:20 +0000   Mon, 22 Jul 2024 00:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:39:20 +0000   Mon, 22 Jul 2024 00:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:39:20 +0000   Mon, 22 Jul 2024 00:30:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.196.103
	  Hostname:    ha-474700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d98a6c0c392f4a15a63a5b53be6383b5
	  System UUID:                2196853a-367c-da49-b3ac-104a8a9fbc62
	  Boot ID:                    563f6506-5094-4515-a320-c46c5ead8804
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tdwp8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 coredns-7db6d8ff4d-fwrd4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 coredns-7db6d8ff4d-ndgcf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-ha-474700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-kldv9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-474700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-474700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fwkpc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-474700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-474700                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-474700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-474700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-474700 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node ha-474700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node ha-474700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node ha-474700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-474700 event: Registered Node ha-474700 in Controller
	  Normal  NodeReady                9m50s              kubelet          Node ha-474700 status is now: NodeReady
	  Normal  RegisteredNode           6m7s               node-controller  Node ha-474700 event: Registered Node ha-474700 in Controller
	  Normal  RegisteredNode           2m                 node-controller  Node ha-474700 event: Registered Node ha-474700 in Controller
	
	
	Name:               ha-474700-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-474700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-474700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T00_33_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:33:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-474700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:40:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:39:26 +0000   Mon, 22 Jul 2024 00:33:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:39:26 +0000   Mon, 22 Jul 2024 00:33:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:39:26 +0000   Mon, 22 Jul 2024 00:33:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:39:26 +0000   Mon, 22 Jul 2024 00:34:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.200.182
	  Hostname:    ha-474700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 daaef73654c84245b42fb87bf31f1432
	  System UUID:                a1ae7714-0c6b-5449-ade5-9be8a5aaaf08
	  Boot ID:                    297ec4e9-29de-4325-9090-d4818bd0aa55
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7fbtz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-ha-474700-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-xmjbz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m29s
	  kube-system                 kube-apiserver-ha-474700-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-controller-manager-ha-474700-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-proxy-kmnj9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-scheduler-ha-474700-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-vip-ha-474700-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m29s (x8 over 6m29s)  kubelet          Node ha-474700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s (x8 over 6m29s)  kubelet          Node ha-474700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s (x7 over 6m29s)  kubelet          Node ha-474700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m25s                  node-controller  Node ha-474700-m02 event: Registered Node ha-474700-m02 in Controller
	  Normal  RegisteredNode           6m7s                   node-controller  Node ha-474700-m02 event: Registered Node ha-474700-m02 in Controller
	  Normal  RegisteredNode           2m                     node-controller  Node ha-474700-m02 event: Registered Node ha-474700-m02 in Controller
	
	
	Name:               ha-474700-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-474700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-474700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T00_38_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:37:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-474700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:40:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:39:28 +0000   Mon, 22 Jul 2024 00:37:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:39:28 +0000   Mon, 22 Jul 2024 00:37:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:39:28 +0000   Mon, 22 Jul 2024 00:37:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:39:28 +0000   Mon, 22 Jul 2024 00:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.196.120
	  Hostname:    ha-474700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eab7c03af4a4829bc8269f525cc3f3b
	  System UUID:                6ff0cb45-3d8f-714a-9d6b-b1501828e840
	  Boot ID:                    9782621a-c0ae-42bf-a72e-cf6b6ea91f67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sv6jt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 etcd-ha-474700-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m18s
	  kube-system                 kindnet-mtsts                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m22s
	  kube-system                 kube-apiserver-ha-474700-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-ha-474700-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-proxy-xzxkz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-ha-474700-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-vip-ha-474700-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
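	
	Percentage cells in these resource tables routinely surface in "minikube logs" output as artifacts like "0 (0%!)(MISSING)": kubectl's text contains literal '%' characters, and passing it through a Printf-style call with no remaining arguments makes Go render each stray "%)" as "%!)(MISSING)". A minimal Go sketch of that mechanism (an illustration, not minikube's actual logging code):
	
	  package main
	
	  import "fmt"
	
	  func main() {
	      row := "cpu 750m (37%)" // kubectl output containing a literal '%'
	      fmt.Printf(row + "\n")  // wrong: "%)" parses as a verb with a missing
	                              // argument, printing "cpu 750m (37%!)(MISSING)"
	      fmt.Print(row + "\n")   // right: the text is emitted verbatim
	  }
	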
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node ha-474700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node ha-474700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m22s (x7 over 2m22s)  kubelet          Node ha-474700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m21s                  node-controller  Node ha-474700-m03 event: Registered Node ha-474700-m03 in Controller
	  Normal  RegisteredNode           2m18s                  node-controller  Node ha-474700-m03 event: Registered Node ha-474700-m03 in Controller
	  Normal  RegisteredNode           2m1s                   node-controller  Node ha-474700-m03 event: Registered Node ha-474700-m03 in Controller
	
	
	==> dmesg <==
	[  +1.125706] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.711796] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul22 00:28] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.172384] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[Jul22 00:29] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.104317] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.589506] systemd-fstab-generator[1039]: Ignoring "noauto" option for root device
	[  +0.201888] systemd-fstab-generator[1051]: Ignoring "noauto" option for root device
	[  +0.264501] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +2.962303] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.221043] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.212375] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.288709] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[ +11.462353] systemd-fstab-generator[1414]: Ignoring "noauto" option for root device
	[  +0.111118] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.935810] systemd-fstab-generator[1665]: Ignoring "noauto" option for root device
	[  +5.612328] systemd-fstab-generator[1859]: Ignoring "noauto" option for root device
	[  +0.108389] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.498692] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.725616] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[Jul22 00:30] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.498783] kauditd_printk_skb: 29 callbacks suppressed
	[Jul22 00:33] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [e7c1294e244e] <==
	{"level":"warn","ts":"2024-07-22T00:37:58.561279Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://172.28.196.120:2380/version","remote-member-id":"2385b76ee203ad8d","error":"Get \"https://172.28.196.120:2380/version\": dial tcp 172.28.196.120:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T00:37:58.561402Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2385b76ee203ad8d","error":"Get \"https://172.28.196.120:2380/version\": dial tcp 172.28.196.120:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-22T00:37:59.097173Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"2385b76ee203ad8d","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-22T00:37:59.429372Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2385b76ee203ad8d"}
	{"level":"info","ts":"2024-07-22T00:37:59.447174Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"cc1ca7429916c5c2","remote-peer-id":"2385b76ee203ad8d"}
	{"level":"info","ts":"2024-07-22T00:37:59.449248Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"cc1ca7429916c5c2","remote-peer-id":"2385b76ee203ad8d"}
	{"level":"info","ts":"2024-07-22T00:37:59.579534Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"cc1ca7429916c5c2","to":"2385b76ee203ad8d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-22T00:37:59.580094Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"cc1ca7429916c5c2","remote-peer-id":"2385b76ee203ad8d"}
	{"level":"info","ts":"2024-07-22T00:37:59.63849Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"cc1ca7429916c5c2","to":"2385b76ee203ad8d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-22T00:37:59.638853Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"cc1ca7429916c5c2","remote-peer-id":"2385b76ee203ad8d"}
	{"level":"warn","ts":"2024-07-22T00:38:00.093854Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"2385b76ee203ad8d","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-22T00:38:00.415346Z","caller":"traceutil/trace.go:171","msg":"trace[618954313] transaction","detail":"{read_only:false; response_revision:1575; number_of_response:1; }","duration":"185.557745ms","start":"2024-07-22T00:38:00.229771Z","end":"2024-07-22T00:38:00.415329Z","steps":["trace[618954313] 'process raft request'  (duration: 185.309345ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:38:00.427374Z","caller":"traceutil/trace.go:171","msg":"trace[98723372] transaction","detail":"{read_only:false; response_revision:1576; number_of_response:1; }","duration":"193.929352ms","start":"2024-07-22T00:38:00.23343Z","end":"2024-07-22T00:38:00.427359Z","steps":["trace[98723372] 'process raft request'  (duration: 193.839952ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:38:01.094735Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"2385b76ee203ad8d","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-22T00:38:02.230423Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"2385b76ee203ad8d","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"2.321458ms"}
	{"level":"warn","ts":"2024-07-22T00:38:02.230547Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"1479c42f94363f26","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"2.449459ms"}
	{"level":"info","ts":"2024-07-22T00:38:02.231302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cc1ca7429916c5c2 switched to configuration voters=(1475426061569638182 2559653650096172429 14707814387563283906)"}
	{"level":"info","ts":"2024-07-22T00:38:02.231379Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"40d59ada9744fea2","local-member-id":"cc1ca7429916c5c2"}
	{"level":"info","ts":"2024-07-22T00:38:02.231408Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"cc1ca7429916c5c2","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"2385b76ee203ad8d"}
	{"level":"info","ts":"2024-07-22T00:38:10.030149Z","caller":"traceutil/trace.go:171","msg":"trace[1654675101] transaction","detail":"{read_only:false; response_revision:1636; number_of_response:1; }","duration":"147.436615ms","start":"2024-07-22T00:38:09.882694Z","end":"2024-07-22T00:38:10.03013Z","steps":["trace[1654675101] 'process raft request'  (duration: 147.248914ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:38:11.281631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.820685ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-474700-m03\" ","response":"range_response_count:1 size:4443"}
	{"level":"info","ts":"2024-07-22T00:38:11.281711Z","caller":"traceutil/trace.go:171","msg":"trace[625399608] range","detail":"{range_begin:/registry/minions/ha-474700-m03; range_end:; response_count:1; response_revision:1639; }","duration":"109.942185ms","start":"2024-07-22T00:38:11.171754Z","end":"2024-07-22T00:38:11.281696Z","steps":["trace[625399608] 'range keys from in-memory index tree'  (duration: 108.077184ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:39:43.422186Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1088}
	{"level":"info","ts":"2024-07-22T00:39:43.591596Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1088,"took":"164.294896ms","hash":2717625788,"current-db-size-bytes":3698688,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2220032,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-22T00:39:43.592605Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2717625788,"revision":1088,"compact-revision":-1}
	
	
	==> kernel <==
	 00:40:18 up 12 min,  0 users,  load average: 0.87, 0.92, 0.52
	Linux ha-474700 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [711176f77704] <==
	I0722 00:39:33.344595       1 main.go:322] Node ha-474700-m02 has CIDR [10.244.1.0/24] 
	I0722 00:39:43.348586       1 main.go:295] Handling node with IPs: map[172.28.196.103:{}]
	I0722 00:39:43.348630       1 main.go:299] handling current node
	I0722 00:39:43.348648       1 main.go:295] Handling node with IPs: map[172.28.200.182:{}]
	I0722 00:39:43.348655       1 main.go:322] Node ha-474700-m02 has CIDR [10.244.1.0/24] 
	I0722 00:39:43.349235       1 main.go:295] Handling node with IPs: map[172.28.196.120:{}]
	I0722 00:39:43.349291       1 main.go:322] Node ha-474700-m03 has CIDR [10.244.2.0/24] 
	I0722 00:39:53.352844       1 main.go:295] Handling node with IPs: map[172.28.200.182:{}]
	I0722 00:39:53.352949       1 main.go:322] Node ha-474700-m02 has CIDR [10.244.1.0/24] 
	I0722 00:39:53.353489       1 main.go:295] Handling node with IPs: map[172.28.196.120:{}]
	I0722 00:39:53.353574       1 main.go:322] Node ha-474700-m03 has CIDR [10.244.2.0/24] 
	I0722 00:39:53.353792       1 main.go:295] Handling node with IPs: map[172.28.196.103:{}]
	I0722 00:39:53.353879       1 main.go:299] handling current node
	I0722 00:40:03.346108       1 main.go:295] Handling node with IPs: map[172.28.196.103:{}]
	I0722 00:40:03.346213       1 main.go:299] handling current node
	I0722 00:40:03.346297       1 main.go:295] Handling node with IPs: map[172.28.200.182:{}]
	I0722 00:40:03.346313       1 main.go:322] Node ha-474700-m02 has CIDR [10.244.1.0/24] 
	I0722 00:40:03.346545       1 main.go:295] Handling node with IPs: map[172.28.196.120:{}]
	I0722 00:40:03.346645       1 main.go:322] Node ha-474700-m03 has CIDR [10.244.2.0/24] 
	I0722 00:40:13.344347       1 main.go:295] Handling node with IPs: map[172.28.200.182:{}]
	I0722 00:40:13.344381       1 main.go:322] Node ha-474700-m02 has CIDR [10.244.1.0/24] 
	I0722 00:40:13.344572       1 main.go:295] Handling node with IPs: map[172.28.196.120:{}]
	I0722 00:40:13.344586       1 main.go:322] Node ha-474700-m03 has CIDR [10.244.2.0/24] 
	I0722 00:40:13.344657       1 main.go:295] Handling node with IPs: map[172.28.196.103:{}]
	I0722 00:40:13.344666       1 main.go:299] handling current node
	
	
	==> kube-apiserver [95bfa6ffee0d] <==
	I0722 00:29:49.405417       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 00:29:49.429211       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0722 00:29:49.447744       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 00:30:02.464325       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0722 00:30:02.573467       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0722 00:37:57.290284       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0722 00:37:57.290701       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0722 00:37:57.290573       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 54µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0722 00:37:57.293743       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0722 00:37:57.294357       1 timeout.go:142] post-timeout activity - time-elapsed: 3.992103ms, PATCH "/api/v1/namespaces/default/events/ha-474700-m03.17e461f83c298cd4" result: <nil>
	E0722 00:39:18.085777       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52856: use of closed network connection
	E0722 00:39:18.809688       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52859: use of closed network connection
	E0722 00:39:19.383212       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52861: use of closed network connection
	E0722 00:39:19.995786       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52863: use of closed network connection
	E0722 00:39:20.683121       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52866: use of closed network connection
	E0722 00:39:21.240715       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52868: use of closed network connection
	E0722 00:39:21.767076       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52870: use of closed network connection
	E0722 00:39:22.321549       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52872: use of closed network connection
	E0722 00:39:22.843289       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52874: use of closed network connection
	E0722 00:39:23.804296       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52877: use of closed network connection
	E0722 00:39:34.332152       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52879: use of closed network connection
	E0722 00:39:34.836855       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52881: use of closed network connection
	E0722 00:39:45.395271       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52884: use of closed network connection
	E0722 00:39:45.914483       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52886: use of closed network connection
	E0722 00:39:56.442349       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52888: use of closed network connection
	
	
	==> kube-controller-manager [c45f67167207] <==
	I0722 00:33:48.609417       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-474700-m02\" does not exist"
	I0722 00:33:48.618512       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-474700-m02" podCIDRs=["10.244.1.0/24"]
	I0722 00:33:52.456042       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-474700-m02"
	I0722 00:37:56.448355       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-474700-m03\" does not exist"
	I0722 00:37:56.461275       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-474700-m03" podCIDRs=["10.244.2.0/24"]
	I0722 00:37:57.502182       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-474700-m03"
	I0722 00:39:10.717175       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.301517ms"
	I0722 00:39:10.769463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.628245ms"
	I0722 00:39:10.771600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.603µs"
	I0722 00:39:10.791296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.701µs"
	I0722 00:39:10.815403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.302µs"
	I0722 00:39:10.817576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.301µs"
	I0722 00:39:11.053359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="228.639652ms"
	I0722 00:39:11.360312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="306.905169ms"
	E0722 00:39:11.360831       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0722 00:39:11.361185       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="240.806µs"
	I0722 00:39:11.367103       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="296.108µs"
	I0722 00:39:11.550191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.450792ms"
	I0722 00:39:11.550883       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.603µs"
	I0722 00:39:14.171082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.015963ms"
	I0722 00:39:14.171298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.7µs"
	I0722 00:39:15.228108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.038065ms"
	I0722 00:39:15.228210       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.401µs"
	I0722 00:39:15.425715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.929941ms"
	I0722 00:39:15.426308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="139.903µs"
	
	
	==> kube-proxy [a27150ded0e0] <==
	I0722 00:30:04.493455       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:30:04.508739       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.196.103"]
	I0722 00:30:04.570836       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 00:30:04.571071       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:30:04.571096       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:30:04.575694       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:30:04.576111       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:30:04.576148       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:30:04.577611       1 config.go:192] "Starting service config controller"
	I0722 00:30:04.577654       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:30:04.577681       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:30:04.577686       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:30:04.580788       1 config.go:319] "Starting node config controller"
	I0722 00:30:04.580875       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:30:04.678860       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:30:04.679163       1 shared_informer.go:320] Caches are synced for service config
	I0722 00:30:04.681671       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2ada486ec6f8] <==
	W0722 00:29:46.492984       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 00:29:46.493022       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 00:29:46.541243       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 00:29:46.541581       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 00:29:46.577124       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 00:29:46.577304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 00:29:46.601134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 00:29:46.601352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 00:29:46.604566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 00:29:46.605518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 00:29:46.777890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 00:29:46.778190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 00:29:46.854101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 00:29:46.854389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0722 00:29:46.928343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:29:46.928446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:29:46.973290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:29:46.973499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 00:29:46.993061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:29:46.993162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:29:47.037323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 00:29:47.037432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0722 00:29:47.077030       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 00:29:47.077126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0722 00:29:48.387482       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 00:35:49 ha-474700 kubelet[2361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:35:49 ha-474700 kubelet[2361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 00:36:49 ha-474700 kubelet[2361]: E0722 00:36:49.601084    2361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:36:49 ha-474700 kubelet[2361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:36:49 ha-474700 kubelet[2361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:36:49 ha-474700 kubelet[2361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:36:49 ha-474700 kubelet[2361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 00:37:49 ha-474700 kubelet[2361]: E0722 00:37:49.593491    2361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:37:49 ha-474700 kubelet[2361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:37:49 ha-474700 kubelet[2361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:37:49 ha-474700 kubelet[2361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:37:49 ha-474700 kubelet[2361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 00:38:49 ha-474700 kubelet[2361]: E0722 00:38:49.592579    2361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:38:49 ha-474700 kubelet[2361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:38:49 ha-474700 kubelet[2361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:38:49 ha-474700 kubelet[2361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:38:49 ha-474700 kubelet[2361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 00:39:10 ha-474700 kubelet[2361]: I0722 00:39:10.724668    2361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ndgcf" podStartSLOduration=548.724644924 podStartE2EDuration="9m8.724644924s" podCreationTimestamp="2024-07-22 00:30:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-22 00:30:30.753252254 +0000 UTC m=+41.427905499" watchObservedRunningTime="2024-07-22 00:39:10.724644924 +0000 UTC m=+561.399298169"
	Jul 22 00:39:10 ha-474700 kubelet[2361]: I0722 00:39:10.725057    2361 topology_manager.go:215] "Topology Admit Handler" podUID="c958ee73-d31c-47e4-82d5-3784cc7c4cec" podNamespace="default" podName="busybox-fc5497c4f-tdwp8"
	Jul 22 00:39:10 ha-474700 kubelet[2361]: I0722 00:39:10.844644    2361 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpx5m\" (UniqueName: \"kubernetes.io/projected/c958ee73-d31c-47e4-82d5-3784cc7c4cec-kube-api-access-gpx5m\") pod \"busybox-fc5497c4f-tdwp8\" (UID: \"c958ee73-d31c-47e4-82d5-3784cc7c4cec\") " pod="default/busybox-fc5497c4f-tdwp8"
	Jul 22 00:39:49 ha-474700 kubelet[2361]: E0722 00:39:49.593251    2361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:39:49 ha-474700 kubelet[2361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:39:49 ha-474700 kubelet[2361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:39:49 ha-474700 kubelet[2361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:39:49 ha-474700 kubelet[2361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:40:09.385530   13104 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
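
The recurring "Unable to resolve the current Docker CLI context" warning in these stderr captures is incidental: minikube logs it and carries on. The Docker CLI keeps context metadata at .docker\contexts\meta\<digest>\meta.json, where the directory name is the SHA-256 digest of the context name; the hex directory in the warning appears to be the digest of "default", a context the CLI does not materialize on disk, hence the missing meta.json. A short Go sketch of the path derivation (an illustration only, not minikube's code):

    package main

    import (
    	"crypto/sha256"
    	"fmt"
    )

    func main() {
    	// The CLI derives a context's storage directory from the
    	// SHA-256 digest of the context name.
    	digest := sha256.Sum256([]byte("default"))
    	fmt.Printf(".docker/contexts/meta/%x/meta.json\n", digest)
    }
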
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-474700 -n ha-474700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-474700 -n ha-474700: (12.8451948s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-474700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (70.45s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (39.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-474700 node stop m02 -v=7 --alsologtostderr: exit status 1 (2.8115574s)

                                                
                                                
-- stdout --
	* Stopping node "ha-474700-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:56:36.380168    4944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0722 00:56:36.471225    4944 out.go:291] Setting OutFile to fd 900 ...
	I0722 00:56:36.490236    4944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:56:36.490236    4944 out.go:304] Setting ErrFile to fd 464...
	I0722 00:56:36.490236    4944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:56:36.507770    4944 mustload.go:65] Loading cluster: ha-474700
	I0722 00:56:36.509062    4944 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:56:36.509062    4944 stop.go:39] StopHost: ha-474700-m02
	I0722 00:56:36.512589    4944 out.go:177] * Stopping node "ha-474700-m02"  ...
	I0722 00:56:36.515418    4944 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 00:56:36.525716    4944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 00:56:36.525716    4944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:56:38.824680    4944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:56:38.824829    4944 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:56:38.824928    4944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-windows-amd64.exe -p ha-474700 node stop m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-474700 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:372: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-474700 status -v=7 --alsologtostderr" : context deadline exceeded
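
The "(0s)" on the failed status invocation indicates the command never actually ran: the test's context had already passed its deadline, and a process launched through an already-expired context fails immediately with the context's error. A minimal Go sketch of that behavior (hypothetical command invocation, not the test harness itself):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// A context whose deadline is already in the past.
    	ctx, cancel := context.WithTimeout(context.Background(), -time.Second)
    	defer cancel()

    	// exec.CommandContext will not start the process once the context
    	// is done, so Run fails immediately with the context's error.
    	err := exec.CommandContext(ctx, "minikube", "status").Run()
    	fmt.Println(err) // context deadline exceeded
    }
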
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-474700 -n ha-474700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-474700 -n ha-474700: (13.0040958s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 logs -n 25: (9.1200341s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt                                                                       | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:51 UTC | 22 Jul 24 00:52 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4265906700\001\cp-test_ha-474700-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n                                                                                                          | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:52 UTC | 22 Jul 24 00:52 UTC |
	|         | ha-474700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt                                                                       | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:52 UTC | 22 Jul 24 00:52 UTC |
	|         | ha-474700:/home/docker/cp-test_ha-474700-m03_ha-474700.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n                                                                                                          | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:52 UTC | 22 Jul 24 00:52 UTC |
	|         | ha-474700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n ha-474700 sudo cat                                                                                       | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:52 UTC | 22 Jul 24 00:52 UTC |
	|         | /home/docker/cp-test_ha-474700-m03_ha-474700.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt                                                                       | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:52 UTC | 22 Jul 24 00:53 UTC |
	|         | ha-474700-m02:/home/docker/cp-test_ha-474700-m03_ha-474700-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n                                                                                                          | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:53 UTC | 22 Jul 24 00:53 UTC |
	|         | ha-474700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n ha-474700-m02 sudo cat                                                                                   | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:53 UTC | 22 Jul 24 00:53 UTC |
	|         | /home/docker/cp-test_ha-474700-m03_ha-474700-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt                                                                       | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:53 UTC | 22 Jul 24 00:53 UTC |
	|         | ha-474700-m04:/home/docker/cp-test_ha-474700-m03_ha-474700-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n                                                                                                          | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:53 UTC | 22 Jul 24 00:53 UTC |
	|         | ha-474700-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n ha-474700-m04 sudo cat                                                                                   | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:53 UTC | 22 Jul 24 00:54 UTC |
	|         | /home/docker/cp-test_ha-474700-m03_ha-474700-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-474700 cp testdata\cp-test.txt                                                                                         | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:54 UTC | 22 Jul 24 00:54 UTC |
	|         | ha-474700-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n                                                                                                          | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:54 UTC | 22 Jul 24 00:54 UTC |
	|         | ha-474700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt                                                                       | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:54 UTC | 22 Jul 24 00:54 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4265906700\001\cp-test_ha-474700-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n                                                                                                          | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:54 UTC | 22 Jul 24 00:54 UTC |
	|         | ha-474700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt                                                                       | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:54 UTC | 22 Jul 24 00:55 UTC |
	|         | ha-474700:/home/docker/cp-test_ha-474700-m04_ha-474700.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n                                                                                                          | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:55 UTC | 22 Jul 24 00:55 UTC |
	|         | ha-474700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n ha-474700 sudo cat                                                                                       | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:55 UTC | 22 Jul 24 00:55 UTC |
	|         | /home/docker/cp-test_ha-474700-m04_ha-474700.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt                                                                       | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:55 UTC | 22 Jul 24 00:55 UTC |
	|         | ha-474700-m02:/home/docker/cp-test_ha-474700-m04_ha-474700-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n                                                                                                          | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:55 UTC | 22 Jul 24 00:55 UTC |
	|         | ha-474700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n ha-474700-m02 sudo cat                                                                                   | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:55 UTC | 22 Jul 24 00:55 UTC |
	|         | /home/docker/cp-test_ha-474700-m04_ha-474700-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt                                                                       | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:55 UTC | 22 Jul 24 00:56 UTC |
	|         | ha-474700-m03:/home/docker/cp-test_ha-474700-m04_ha-474700-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n                                                                                                          | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:56 UTC | 22 Jul 24 00:56 UTC |
	|         | ha-474700-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-474700 ssh -n ha-474700-m03 sudo cat                                                                                   | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:56 UTC | 22 Jul 24 00:56 UTC |
	|         | /home/docker/cp-test_ha-474700-m04_ha-474700-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-474700 node stop m02 -v=7                                                                                              | ha-474700 | minikube6\jenkins | v1.33.1 | 22 Jul 24 00:56 UTC |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 00:26:39
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 00:26:39.221971   13232 out.go:291] Setting OutFile to fd 464 ...
	I0722 00:26:39.223984   13232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:26:39.223984   13232 out.go:304] Setting ErrFile to fd 612...
	I0722 00:26:39.223984   13232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:26:39.245984   13232 out.go:298] Setting JSON to false
	I0722 00:26:39.247973   13232 start.go:129] hostinfo: {"hostname":"minikube6","uptime":123206,"bootTime":1721484792,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0722 00:26:39.248984   13232 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 00:26:39.256984   13232 out.go:177] * [ha-474700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0722 00:26:39.260974   13232 notify.go:220] Checking for updates...
	I0722 00:26:39.260974   13232 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:26:39.263973   13232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:26:39.265972   13232 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0722 00:26:39.268973   13232 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:26:39.271983   13232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:26:39.274973   13232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 00:26:44.712926   13232 out.go:177] * Using the hyperv driver based on user configuration
	I0722 00:26:44.718207   13232 start.go:297] selected driver: hyperv
	I0722 00:26:44.718207   13232 start.go:901] validating driver "hyperv" against <nil>
	I0722 00:26:44.718207   13232 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 00:26:44.767662   13232 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 00:26:44.768392   13232 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:26:44.768392   13232 cni.go:84] Creating CNI manager for ""
	I0722 00:26:44.768392   13232 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0722 00:26:44.768392   13232 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 00:26:44.769053   13232 start.go:340] cluster config:
	{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
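	(The block above is the cluster config struct printed with Go's %+v verb before being saved to config.json. A trimmed, hypothetical mirror of a few of its fields shows the shape of what lands on disk; the field selection here is illustrative, not minikube's actual type.)

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
        NetworkPlugin     string
        ServiceCIDR       string
    }

    type ClusterConfig struct {
        Name             string
        Driver           string
        Memory           int // MB
        CPUs             int
        DiskSize         int // MB
        KubernetesConfig KubernetesConfig
    }

    func main() {
        cfg := ClusterConfig{
            Name:   "ha-474700",
            Driver: "hyperv",
            Memory: 2200, CPUs: 2, DiskSize: 20000,
            KubernetesConfig: KubernetesConfig{
                KubernetesVersion: "v1.30.3",
                ClusterName:       "ha-474700",
                ContainerRuntime:  "docker",
                NetworkPlugin:     "cni",
                ServiceCIDR:       "10.96.0.0/12",
            },
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out))
    }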
	I0722 00:26:44.769053   13232 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 00:26:44.777816   13232 out.go:177] * Starting "ha-474700" primary control-plane node in "ha-474700" cluster
	I0722 00:26:44.783914   13232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 00:26:44.783914   13232 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0722 00:26:44.783914   13232 cache.go:56] Caching tarball of preloaded images
	I0722 00:26:44.783914   13232 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 00:26:44.783914   13232 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 00:26:44.784940   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:26:44.784940   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json: {Name:mk591e8a86ee287de6657a04867487c561e834a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:26:44.786199   13232 start.go:360] acquireMachinesLock for ha-474700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:26:44.787102   13232 start.go:364] duration metric: took 902.3µs to acquireMachinesLock for "ha-474700"
	I0722 00:26:44.787244   13232 start.go:93] Provisioning new machine with config: &{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:26:44.787244   13232 start.go:125] createHost starting for "" (driver="hyperv")
	I0722 00:26:44.795860   13232 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 00:26:44.795860   13232 start.go:159] libmachine.API.Create for "ha-474700" (driver="hyperv")
	I0722 00:26:44.795860   13232 client.go:168] LocalClient.Create starting
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Parsing certificate...
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:26:44.796847   13232 main.go:141] libmachine: Parsing certificate...
	I0722 00:26:44.796847   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0722 00:26:46.920893   13232 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0722 00:26:46.920946   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:26:46.920946   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0722 00:26:48.669298   13232 main.go:141] libmachine: [stdout =====>] : False
	
	I0722 00:26:48.670076   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:26:48.670076   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:26:50.173577   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:26:50.173666   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:26:50.173757   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:26:53.798258   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:26:53.798258   13232 main.go:141] libmachine: [stderr =====>] : 
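	(The switch enumeration above is ordinary PowerShell run from Go, with the result decoded as JSON. A minimal sketch, assuming powershell.exe and the Hyper-V module are available; the vmSwitch type is mine, not minikube's.)

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // 0=Private, 1=Internal, 2=External
    }

    func main() {
        cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            `[Console]::OutputEncoding = [Text.Encoding]::UTF8; `+
                `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
        out, err := cmd.Output()
        if err != nil {
            fmt.Println("powershell failed:", err)
            return
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, s := range switches {
            fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }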
	I0722 00:26:53.800988   13232 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 00:26:54.254936   13232 main.go:141] libmachine: Creating SSH key...
	I0722 00:26:54.443700   13232 main.go:141] libmachine: Creating VM...
	I0722 00:26:54.443700   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:26:57.305451   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:26:57.305713   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:26:57.305713   13232 main.go:141] libmachine: Using switch "Default Switch"
	I0722 00:26:57.305851   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:26:59.088860   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:26:59.089142   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:26:59.089176   13232 main.go:141] libmachine: Creating VHD
	I0722 00:26:59.089176   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0722 00:27:02.923386   13232 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D35E6BF3-D7D2-4C13-8EE5-3CEC4F188D51
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0722 00:27:02.923823   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:02.923823   13232 main.go:141] libmachine: Writing magic tar header
	I0722 00:27:02.924019   13232 main.go:141] libmachine: Writing SSH key tar header
	I0722 00:27:02.935276   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0722 00:27:06.147738   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:06.147738   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:06.148743   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\disk.vhd' -SizeBytes 20000MB
	I0722 00:27:08.709897   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:08.709897   13232 main.go:141] libmachine: [stderr =====>] : 
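	(The "magic tar header" lines above refer to a trick inherited from docker-machine: the SSH public key is written as a tar stream into the data area of the freshly created fixed VHD, where the guest's boot scripts find and unpack it on first boot; the conversion to a dynamic VHD and the resize follow, as logged. A rough sketch of the tar step, with illustrative file paths:)

    package main

    import (
        "archive/tar"
        "log"
        "os"
    )

    func main() {
        // Open the fixed-size VHD created by New-VHD; its data area starts at
        // offset 0 (the VHD footer lives at the end of the file).
        f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        pubKey, err := os.ReadFile("id_rsa.pub")
        if err != nil {
            log.Fatal(err)
        }

        tw := tar.NewWriter(f)
        // Directory entry first, then the key file, mirroring docker-machine's layout.
        if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Typeflag: tar.TypeDir, Mode: 0700}); err != nil {
            log.Fatal(err)
        }
        if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(pubKey))}); err != nil {
            log.Fatal(err)
        }
        if _, err := tw.Write(pubKey); err != nil {
            log.Fatal(err)
        }
        if err := tw.Close(); err != nil {
            log.Fatal(err)
        }
    }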
	I0722 00:27:08.710839   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-474700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0722 00:27:12.428220   13232 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version

	----      ----- ----------- ----------------- ------   ------             -------
	ha-474700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0722 00:27:12.428390   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:12.428390   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-474700 -DynamicMemoryEnabled $false
	I0722 00:27:14.732980   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:14.732980   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:14.732980   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-474700 -Count 2
	I0722 00:27:16.938752   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:16.938980   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:16.939107   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-474700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\boot2docker.iso'
	I0722 00:27:19.550560   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:19.551520   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:19.551789   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-474700 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\disk.vhd'
	I0722 00:27:22.226866   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:22.226866   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:22.227433   13232 main.go:141] libmachine: Starting VM...
	I0722 00:27:22.227490   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-474700
	I0722 00:27:25.488148   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:25.488148   13232 main.go:141] libmachine: [stderr =====>] : 
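	(The whole New-VM through Start-VM sequence above is a chain of separate PowerShell invocations. Condensed into one hedged Go sketch; the machine directory placeholder is mine and error handling is minimal:)

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func ps(command string) {
        out, err := exec.Command("powershell.exe",
            "-NoProfile", "-NonInteractive", command).CombinedOutput()
        if err != nil {
            log.Fatalf("%s: %v\n%s", command, err, out)
        }
        fmt.Printf("%s", out)
    }

    func main() {
        name := "ha-474700"
        dir := `C:\minikube\machines\ha-474700` // illustrative path
        ps(fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir))
        ps(fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name))
        ps(fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name))
        ps(fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir))
        ps(fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir))
        ps(fmt.Sprintf(`Hyper-V\Start-VM %s`, name))
    }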
	I0722 00:27:25.488940   13232 main.go:141] libmachine: Waiting for host to start...
	I0722 00:27:25.488940   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:27.825709   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:27.825709   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:27.825709   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:27:30.481873   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:30.482667   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:31.497892   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:33.735810   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:33.736842   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:33.736959   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:27:36.315419   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:36.315818   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:37.316708   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:39.521874   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:39.522409   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:39.522451   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:27:42.052425   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:42.052470   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:43.054279   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:45.305228   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:45.305228   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:45.306085   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:27:47.843247   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:27:47.843698   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:48.846426   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:51.147932   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:51.147932   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:51.148637   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:27:53.711926   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:27:53.711926   13232 main.go:141] libmachine: [stderr =====>] : 
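	(The repetition above is a plain poll loop: query the VM state and the first NIC's first address until an IPv4 address appears. Roughly as follows; the interval and retry cap are assumptions, not minikube's exact values:)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // psQuery runs one PowerShell expression and returns its trimmed stdout;
    // errors are deliberately swallowed here, since an empty result just
    // means "ask again".
    func psQuery(command string) string {
        out, _ := exec.Command("powershell.exe",
            "-NoProfile", "-NonInteractive", command).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        const vm = "ha-474700"
        for i := 0; i < 120; i++ {
            state := psQuery(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            ip := psQuery(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
            if state == "Running" && ip != "" && !strings.Contains(ip, ":") { // skip IPv6
                fmt.Println("host up at", ip)
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for host")
    }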
	I0722 00:27:53.712832   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:55.883791   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:55.883791   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:55.883791   13232 machine.go:94] provisionDockerMachine start ...
	I0722 00:27:55.884063   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:27:58.079188   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:27:58.079188   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:27:58.079188   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:00.674656   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:00.674708   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:00.680461   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:00.691523   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:00.691523   13232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:28:00.826889   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:28:00.826958   13232 buildroot.go:166] provisioning hostname "ha-474700"
	I0722 00:28:00.827068   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:02.985917   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:02.985917   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:02.986696   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:05.554276   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:05.555309   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:05.560666   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:05.560666   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:05.560666   13232 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-474700 && echo "ha-474700" | sudo tee /etc/hostname
	I0722 00:28:05.738716   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-474700
	
	I0722 00:28:05.739039   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:07.909364   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:07.909364   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:07.910200   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:10.503709   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:10.503709   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:10.509909   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:10.510666   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:10.510666   13232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-474700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-474700/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-474700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:28:10.663807   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:28:10.663893   13232 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0722 00:28:10.663967   13232 buildroot.go:174] setting up certificates
	I0722 00:28:10.663967   13232 provision.go:84] configureAuth start
	I0722 00:28:10.663967   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:12.848991   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:12.849404   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:12.849483   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:15.393747   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:15.394014   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:15.394096   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:17.545945   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:17.546499   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:17.546617   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:20.109287   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:20.110099   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:20.110099   13232 provision.go:143] copyHostCerts
	I0722 00:28:20.110311   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0722 00:28:20.110690   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0722 00:28:20.110690   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0722 00:28:20.110946   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0722 00:28:20.112773   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0722 00:28:20.112928   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0722 00:28:20.112928   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0722 00:28:20.112928   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0722 00:28:20.114199   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0722 00:28:20.114199   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0722 00:28:20.114199   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0722 00:28:20.114939   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0722 00:28:20.116125   13232 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-474700 san=[127.0.0.1 172.28.196.103 ha-474700 localhost minikube]
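	(The server cert above is issued with SANs covering the VM's IP plus the hostnames in san=[...]. A compact sketch of the same idea with crypto/x509; note the CA here is generated in-memory for brevity, whereas minikube loads ca.pem/ca-key.pem from disk, and error handling is abbreviated:)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikubeCA"},
            NotBefore:    time.Now(), NotAfter: time.Now().AddDate(10, 0, 0),
            IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-474700"}},
            NotBefore:    time.Now(), NotAfter: time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as logged: hostnames plus loopback and the VM's address.
            DNSNames:    []string{"ha-474700", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.196.103")},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }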
	I0722 00:28:20.476100   13232 provision.go:177] copyRemoteCerts
	I0722 00:28:20.487026   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:28:20.487026   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:22.712406   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:22.712406   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:22.713351   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:25.285459   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:25.286635   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:25.287250   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:28:25.405119   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9179962s)
	I0722 00:28:25.405119   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0722 00:28:25.405402   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 00:28:25.452581   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0722 00:28:25.453174   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0722 00:28:25.495841   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0722 00:28:25.495979   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:28:25.543350   13232 provision.go:87] duration metric: took 14.8791943s to configureAuth
	I0722 00:28:25.543350   13232 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:28:25.544592   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:28:25.544889   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:27.737730   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:27.737730   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:27.737938   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:30.314881   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:30.314881   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:30.320666   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:30.321233   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:30.321389   13232 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 00:28:30.451894   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 00:28:30.451894   13232 buildroot.go:70] root file system type: tmpfs
	I0722 00:28:30.452092   13232 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 00:28:30.452233   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:32.620772   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:32.621849   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:32.621878   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:35.214956   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:35.215306   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:35.221015   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:35.221227   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:35.221227   13232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 00:28:35.387450   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 00:28:35.387450   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:37.552753   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:37.552983   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:37.553147   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:40.121107   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:40.121969   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:40.127709   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:40.128456   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:40.128515   13232 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 00:28:42.395668   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 00:28:42.395668   13232 machine.go:97] duration metric: took 46.5112859s to provisionDockerMachine
	I0722 00:28:42.395958   13232 client.go:171] duration metric: took 1m57.5983198s to LocalClient.Create
	I0722 00:28:42.395958   13232 start.go:167] duration metric: took 1m57.5986096s to libmachine.API.Create "ha-474700"
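	(Note the guarded install a few lines up: the one-liner diffs the rendered unit against the installed one and only moves it into place and restarts docker when they differ; here diff failed because no unit existed yet, so this was a first install, hence the "Created symlink" message. The same guard, sketched locally in Go with the paths used on the VM; in reality it runs over SSH:)

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        oldUnit, _ := os.ReadFile("/lib/systemd/system/docker.service") // may not exist yet
        newUnit, err := os.ReadFile("/lib/systemd/system/docker.service.new")
        if err != nil {
            fmt.Println("no pending unit:", err)
            return
        }
        if bytes.Equal(oldUnit, newUnit) {
            fmt.Println("unit unchanged; skipping restart")
            return
        }
        if err := os.Rename("/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service"); err != nil {
            fmt.Println(err)
            return
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                fmt.Printf("%v: %v\n%s", args, err, out)
                return
            }
        }
        fmt.Println("docker unit installed and restarted")
    }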
	I0722 00:28:42.395958   13232 start.go:293] postStartSetup for "ha-474700" (driver="hyperv")
	I0722 00:28:42.395958   13232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:28:42.408866   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:28:42.408866   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:44.540655   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:44.540655   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:44.540890   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:47.136808   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:47.136808   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:47.137290   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:28:47.246528   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8376007s)
	I0722 00:28:47.257655   13232 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:28:47.267997   13232 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:28:47.267997   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0722 00:28:47.268918   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0722 00:28:47.269687   13232 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0722 00:28:47.269774   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0722 00:28:47.280064   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:28:47.299834   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0722 00:28:47.344490   13232 start.go:296] duration metric: took 4.9484694s for postStartSetup
	I0722 00:28:47.347933   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:49.550285   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:49.550366   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:49.550366   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:52.116017   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:52.116017   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:52.116017   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:28:52.119028   13232 start.go:128] duration metric: took 2m7.3301724s to createHost
	I0722 00:28:52.119028   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:54.284490   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:54.284490   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:54.285100   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:28:56.919072   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:28:56.919072   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:56.923853   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:28:56.924556   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:28:56.924556   13232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:28:57.060727   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721608137.085529417
	
	I0722 00:28:57.060727   13232 fix.go:216] guest clock: 1721608137.085529417
	I0722 00:28:57.060727   13232 fix.go:229] Guest: 2024-07-22 00:28:57.085529417 +0000 UTC Remote: 2024-07-22 00:28:52.1190285 +0000 UTC m=+133.047205201 (delta=4.966500917s)
	I0722 00:28:57.061261   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:28:59.241766   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:28:59.241766   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:28:59.242527   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:29:01.833461   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:29:01.833461   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:01.841057   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:29:01.841794   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.103 22 <nil> <nil>}
	I0722 00:29:01.841794   13232 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721608137
	I0722 00:29:01.990676   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jul 22 00:28:57 UTC 2024
	
	I0722 00:29:01.990676   13232 fix.go:236] clock set: Mon Jul 22 00:28:57 UTC 2024
	 (err=<nil>)
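	(The clock fix above parses the guest's date +%s.%N output, compares it against the host clock, and writes a time back with date -s when the drift is too large; roughly 5 seconds of drift triggered it here. A sketch of just the comparison; the 2-second threshold is an assumption for illustration:)

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Stand-in for the output of `date +%s.%N` collected from the guest.
        guestOut := "1721608137.085529417"

        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        epoch, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            fmt.Println("bad guest time:", err)
            return
        }
        guest := time.Unix(epoch, 0)
        delta := guest.Sub(time.Now()) // ~5s in the run logged above
        fmt.Printf("guest=%s drift=%s\n", guest.UTC(), delta)

        if delta > 2*time.Second || delta < -2*time.Second {
            fmt.Printf("would run: sudo date -s @%d\n", epoch)
        }
    }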
	I0722 00:29:01.990676   13232 start.go:83] releasing machines lock for "ha-474700", held for 2m17.2018378s
	I0722 00:29:01.991411   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:29:04.193615   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:29:04.193615   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:04.194682   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:29:06.738773   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:29:06.738773   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:06.743755   13232 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0722 00:29:06.743832   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:29:06.753934   13232 ssh_runner.go:195] Run: cat /version.json
	I0722 00:29:06.753934   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:29:09.081599   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:29:09.081816   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:09.081956   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:29:09.086527   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:29:09.086615   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:09.087109   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:29:11.976444   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:29:11.976444   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:11.977012   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:29:12.031940   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:29:12.031940   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:29:12.032462   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:29:12.084269   13232 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.3404465s)
	W0722 00:29:12.084269   13232 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0722 00:29:12.133393   13232 ssh_runner.go:235] Completed: cat /version.json: (5.3793907s)
	I0722 00:29:12.146809   13232 ssh_runner.go:195] Run: systemctl --version
	I0722 00:29:12.171652   13232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 00:29:12.182170   13232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:29:12.194405   13232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0722 00:29:12.203129   13232 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0722 00:29:12.203267   13232 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0722 00:29:12.228028   13232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:29:12.228107   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:29:12.228600   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:29:12.286530   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 00:29:12.320219   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 00:29:12.340249   13232 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 00:29:12.354389   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 00:29:12.387792   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:29:12.420997   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 00:29:12.453626   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:29:12.490792   13232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:29:12.531641   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 00:29:12.568035   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 00:29:12.606758   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 00:29:12.642021   13232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:29:12.673455   13232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:29:12.705292   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:12.913449   13232 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 00:29:12.948564   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:29:12.961136   13232 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 00:29:12.998900   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:29:13.033582   13232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:29:13.081223   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:29:13.119935   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:29:13.160434   13232 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 00:29:13.225229   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:29:13.255334   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:29:13.308531   13232 ssh_runner.go:195] Run: which cri-dockerd
	I0722 00:29:13.334391   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 00:29:13.352881   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 00:29:13.400058   13232 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 00:29:13.610798   13232 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 00:29:13.812961   13232 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 00:29:13.813217   13232 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 00:29:13.860903   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:14.077860   13232 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 00:29:16.741746   13232 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6637188s)
	I0722 00:29:16.753701   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 00:29:16.793574   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:29:16.832300   13232 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 00:29:17.052568   13232 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 00:29:17.257576   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:17.464695   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 00:29:17.508596   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:29:17.560811   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:17.752472   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 00:29:17.872085   13232 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 00:29:17.886983   13232 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 00:29:17.896708   13232 start.go:563] Will wait 60s for crictl version
	I0722 00:29:17.911396   13232 ssh_runner.go:195] Run: which crictl
	I0722 00:29:17.932497   13232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:29:17.987436   13232 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 00:29:17.997600   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:29:18.048331   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:29:18.106976   13232 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 00:29:18.106976   13232 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0722 00:29:18.111177   13232 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0722 00:29:18.111177   13232 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0722 00:29:18.111177   13232 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0722 00:29:18.111177   13232 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e8:0a:ec Flags:up|broadcast|multicast|running}
	I0722 00:29:18.113656   13232 ip.go:210] interface addr: fe80::cedd:59ec:4db2:d0bf/64
	I0722 00:29:18.113656   13232 ip.go:210] interface addr: 172.28.192.1/20
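getIPForInterface walks the host adapters, skips names that don't carry the requested prefix, and returns the first match's IPv4 address. A stdlib-only sketch of that selection (hypothetical re-implementation, not the actual ip.go):

```go
// Sketch: pick the first interface whose name has the given prefix and
// return its first IPv4 address, roughly the selection the log shows
// for "vEthernet (Default Switch)".
package main

import (
	"fmt"
	"log"
	"net"
	"strings"
)

func ipForInterfacePrefix(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue // e.g. "Ethernet 2" does not match the prefix
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil // e.g. 172.28.192.1
			}
		}
	}
	return nil, fmt.Errorf("no interface matches prefix %q", prefix)
}

func main() {
	ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ip)
}
```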
	I0722 00:29:18.127142   13232 ssh_runner.go:195] Run: grep 172.28.192.1	host.minikube.internal$ /etc/hosts
	I0722 00:29:18.134015   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:29:18.171733   13232 kubeadm.go:883] updating cluster {Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 00:29:18.171733   13232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 00:29:18.182418   13232 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 00:29:18.209545   13232 docker.go:685] Got preloaded images: 
	I0722 00:29:18.209646   13232 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0722 00:29:18.223151   13232 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 00:29:18.252303   13232 ssh_runner.go:195] Run: which lz4
	I0722 00:29:18.259956   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0722 00:29:18.269992   13232 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 00:29:18.277247   13232 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 00:29:18.277339   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0722 00:29:20.396559   13232 docker.go:649] duration metric: took 2.1362343s to copy over tarball
	I0722 00:29:20.408651   13232 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 00:29:28.874742   13232 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4659838s)
	I0722 00:29:28.874742   13232 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 00:29:28.938472   13232 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 00:29:28.956602   13232 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0722 00:29:29.004598   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:29.226490   13232 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 00:29:32.700694   13232 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.4741592s)
	I0722 00:29:32.711067   13232 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 00:29:32.737505   13232 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0722 00:29:32.737505   13232 cache_images.go:84] Images are preloaded, skipping loading
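The preload round-trip is gated on a cheap check: list what Docker already has and look for the expected kube-apiserver tag; only a miss triggers the ~360 MB scp and untar seen above. The gate as a sketch (hypothetical, docker CLI assumed on PATH):

```go
// Sketch: decide whether the v1.30.3 preload is needed by checking
// `docker images` output for the kube-apiserver tag, as the log does.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.30.3"
	if strings.Contains(string(out), want) {
		fmt.Println("images are preloaded, skipping loading")
	} else {
		fmt.Println(want, "wasn't preloaded; extracting preload tarball")
	}
}
```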
	I0722 00:29:32.737641   13232 kubeadm.go:934] updating node { 172.28.196.103 8443 v1.30.3 docker true true} ...
	I0722 00:29:32.737846   13232 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-474700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.196.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
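The kubelet drop-in above is rendered from three per-node values: the Kubernetes version (which selects the binary path), the hostname override, and the node IP. A text/template sketch of that rendering (the template shape is assumed from the printed unit, Service section only):

```go
// Sketch: render the kubelet ExecStart from per-node values, mirroring
// the unit printed above. The template shape is an assumption.
package main

import (
	"log"
	"os"
	"text/template"
)

const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, struct{ Version, Name, IP string }{
		Version: "v1.30.3", Name: "ha-474700", IP: "172.28.196.103",
	})
	if err != nil {
		log.Fatal(err)
	}
}
```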
	I0722 00:29:32.748350   13232 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0722 00:29:32.785700   13232 cni.go:84] Creating CNI manager for ""
	I0722 00:29:32.785700   13232 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 00:29:32.785700   13232 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 00:29:32.785700   13232 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.196.103 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-474700 NodeName:ha-474700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.196.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.196.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 00:29:32.785700   13232 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.196.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-474700"
	  kubeletExtraArgs:
	    node-ip: 172.28.196.103
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.196.103"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 00:29:32.785700   13232 kube-vip.go:115] generating kube-vip config ...
	I0722 00:29:32.798468   13232 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 00:29:32.824431   13232 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 00:29:32.824431   13232 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
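kube-vip advertises the HA virtual IP 172.28.207.254 via ARP leader election and, with lb_enable set, load-balances apiserver traffic on 8443. Once its static pod runs, the VIP should accept connections; a quick probe (hypothetical check, not part of minikube):

```go
// Sketch: probe the kube-vip virtual IP on the apiserver port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "172.28.207.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not answering yet:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP is reachable")
}
```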
	I0722 00:29:32.838539   13232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:29:32.853043   13232 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 00:29:32.866381   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0722 00:29:32.883950   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0722 00:29:32.913256   13232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:29:32.944871   13232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0722 00:29:32.973650   13232 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0722 00:29:33.017291   13232 ssh_runner.go:195] Run: grep 172.28.207.254	control-plane.minikube.internal$ /etc/hosts
	I0722 00:29:33.026777   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
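Both /etc/hosts edits (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same idempotent pattern: filter out any stale line for the name, append the fresh mapping, write back. The one-liner restated as a sketch over a local file:

```go
// Sketch: idempotently pin a hostname in a hosts file, mirroring the
// grep -v / echo / cp one-liner in the log.
package main

import (
	"log"
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(b), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name (grep -v)
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name) // append the fresh mapping (echo)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHost("/etc/hosts", "172.28.207.254", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```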
	I0722 00:29:33.068588   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:29:33.271918   13232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:29:33.302647   13232 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700 for IP: 172.28.196.103
	I0722 00:29:33.302647   13232 certs.go:194] generating shared ca certs ...
	I0722 00:29:33.302647   13232 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.303606   13232 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0722 00:29:33.304024   13232 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0722 00:29:33.304191   13232 certs.go:256] generating profile certs ...
	I0722 00:29:33.304822   13232 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.key
	I0722 00:29:33.304998   13232 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.crt with IP's: []
	I0722 00:29:33.484309   13232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.crt ...
	I0722 00:29:33.485347   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.crt: {Name:mk6ec30550eeb2a591a614a0b36b22c6fae9522e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.486633   13232 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.key ...
	I0722 00:29:33.486633   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.key: {Name:mk3f09d2f0d20bdf458943336f4c23c48dfcdc83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.487579   13232 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.8458c8ba
	I0722 00:29:33.487579   13232 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.8458c8ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.196.103 172.28.207.254]
	I0722 00:29:33.646137   13232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.8458c8ba ...
	I0722 00:29:33.646137   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.8458c8ba: {Name:mkc0f1f56dd689a73b6dc1cf40052e2e4287fef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.647387   13232 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.8458c8ba ...
	I0722 00:29:33.647387   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.8458c8ba: {Name:mkb8fab9785e7362a737eb82dc6d8bb058fa3c57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.649325   13232 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.8458c8ba -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt
	I0722 00:29:33.661757   13232 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.8458c8ba -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key
	I0722 00:29:33.664425   13232 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key
	I0722 00:29:33.664659   13232 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt with IP's: []
	I0722 00:29:33.843913   13232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt ...
	I0722 00:29:33.843913   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt: {Name:mk1eb944a45c1547219466417810d4bdfb6e46f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:29:33.845083   13232 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key ...
	I0722 00:29:33.846109   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key: {Name:mkb018250d4b82cd4a539b76f9426ffa11a19feb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
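Each profile cert is a leaf minted for a fixed SAN list; 10.96.0.1 is the in-cluster service address of the apiserver and 172.28.207.254 the kube-vip VIP. A compact crypto/x509 sketch of minting such a cert (self-signed here for brevity, where minikube signs with minikubeCA; the SAN list and 26280h lifetime come from the log):

```go
// Sketch: generate a key and a cert carrying the apiserver SAN list
// from the log. Self-signed for brevity; minikube signs these leaves
// with its CA instead.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs from the apiserver cert above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.28.196.103"), net.ParseIP("172.28.207.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```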
	I0722 00:29:33.847385   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 00:29:33.847584   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0722 00:29:33.847795   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 00:29:33.847980   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 00:29:33.848190   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 00:29:33.848333   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 00:29:33.848522   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 00:29:33.858746   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 00:29:33.859258   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem (1338 bytes)
	W0722 00:29:33.859893   13232 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100_empty.pem, impossibly tiny 0 bytes
	I0722 00:29:33.859893   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0722 00:29:33.860296   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0722 00:29:33.860548   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0722 00:29:33.860860   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0722 00:29:33.861153   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem (1708 bytes)
	I0722 00:29:33.861704   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem -> /usr/share/ca-certificates/5100.pem
	I0722 00:29:33.861881   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /usr/share/ca-certificates/51002.pem
	I0722 00:29:33.862016   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:29:33.863431   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:29:33.910143   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:29:33.957775   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:29:34.004195   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 00:29:34.053748   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 00:29:34.097753   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 00:29:34.143517   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:29:34.188521   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:29:34.237403   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem --> /usr/share/ca-certificates/5100.pem (1338 bytes)
	I0722 00:29:34.283066   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /usr/share/ca-certificates/51002.pem (1708 bytes)
	I0722 00:29:34.327697   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:29:34.377917   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 00:29:34.421282   13232 ssh_runner.go:195] Run: openssl version
	I0722 00:29:34.442067   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5100.pem && ln -fs /usr/share/ca-certificates/5100.pem /etc/ssl/certs/5100.pem"
	I0722 00:29:34.473642   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5100.pem
	I0722 00:29:34.480797   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:45 /usr/share/ca-certificates/5100.pem
	I0722 00:29:34.493209   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5100.pem
	I0722 00:29:34.514239   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5100.pem /etc/ssl/certs/51391683.0"
	I0722 00:29:34.547267   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51002.pem && ln -fs /usr/share/ca-certificates/51002.pem /etc/ssl/certs/51002.pem"
	I0722 00:29:34.583885   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51002.pem
	I0722 00:29:34.592100   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:45 /usr/share/ca-certificates/51002.pem
	I0722 00:29:34.604316   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51002.pem
	I0722 00:29:34.624306   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51002.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:29:34.655760   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:29:34.686734   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:29:34.693826   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:29:34.705111   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:29:34.725549   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:29:34.755132   13232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:29:34.765949   13232 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 00:29:34.766304   13232 kubeadm.go:392] StartCluster: {Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:29:34.775036   13232 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 00:29:34.812153   13232 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 00:29:34.851674   13232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 00:29:34.883342   13232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 00:29:34.898716   13232 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 00:29:34.898716   13232 kubeadm.go:157] found existing configuration files:
	
	I0722 00:29:34.909893   13232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 00:29:34.926930   13232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 00:29:34.940064   13232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 00:29:34.973371   13232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 00:29:34.990117   13232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 00:29:35.001439   13232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 00:29:35.028383   13232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 00:29:35.045384   13232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 00:29:35.056404   13232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 00:29:35.085496   13232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 00:29:35.101644   13232 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 00:29:35.112898   13232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
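The four grep/rm pairs apply one rule: keep a kubeconfig under /etc/kubernetes only if it already references https://control-plane.minikube.internal:8443, and remove anything else (including files that simply don't exist, as on this first start). The rule as a sketch:

```go
// Sketch: remove any /etc/kubernetes conf that does not reference the
// expected control-plane endpoint, as the grep/rm pairs above do.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		b, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(b), endpoint) {
			continue // already points at the right endpoint; keep it
		}
		os.Remove(path) // rm -f semantics: ignore errors, including "not found"
		fmt.Println("removed (or absent):", path)
	}
}
```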
	I0722 00:29:35.129791   13232 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 00:29:35.574823   13232 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 00:29:49.968026   13232 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 00:29:49.968026   13232 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 00:29:49.968026   13232 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 00:29:49.968026   13232 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 00:29:49.968026   13232 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0722 00:29:49.968026   13232 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 00:29:49.972935   13232 out.go:204]   - Generating certificates and keys ...
	I0722 00:29:49.972935   13232 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 00:29:49.972935   13232 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 00:29:49.973484   13232 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 00:29:49.973696   13232 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 00:29:49.973968   13232 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 00:29:49.974163   13232 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 00:29:49.974316   13232 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 00:29:49.974855   13232 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-474700 localhost] and IPs [172.28.196.103 127.0.0.1 ::1]
	I0722 00:29:49.975054   13232 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 00:29:49.975337   13232 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-474700 localhost] and IPs [172.28.196.103 127.0.0.1 ::1]
	I0722 00:29:49.975552   13232 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 00:29:49.975552   13232 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 00:29:49.975552   13232 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 00:29:49.975552   13232 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 00:29:49.976399   13232 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 00:29:49.976606   13232 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 00:29:49.976785   13232 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 00:29:49.977132   13232 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 00:29:49.977292   13232 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 00:29:49.977446   13232 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 00:29:49.977446   13232 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 00:29:49.979910   13232 out.go:204]   - Booting up control plane ...
	I0722 00:29:49.979910   13232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 00:29:49.981118   13232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 00:29:49.981118   13232 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 00:29:49.981118   13232 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 00:29:49.981664   13232 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 00:29:49.981841   13232 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 00:29:49.981934   13232 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 00:29:49.981934   13232 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 00:29:49.981934   13232 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.272913ms
	I0722 00:29:49.982620   13232 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 00:29:49.982832   13232 kubeadm.go:310] [api-check] The API server is healthy after 9.147227816s
	I0722 00:29:49.983112   13232 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 00:29:49.983369   13232 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 00:29:49.983369   13232 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 00:29:49.983776   13232 kubeadm.go:310] [mark-control-plane] Marking the node ha-474700 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 00:29:49.983776   13232 kubeadm.go:310] [bootstrap-token] Using token: 1axj62.jwf7mo13iodfl6h7
	I0722 00:29:49.988738   13232 out.go:204]   - Configuring RBAC rules ...
	I0722 00:29:49.988738   13232 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 00:29:49.989509   13232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 00:29:49.989509   13232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 00:29:49.989509   13232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 00:29:49.989509   13232 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 00:29:49.989509   13232 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 00:29:49.990697   13232 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 00:29:49.990697   13232 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 00:29:49.990697   13232 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 00:29:49.990697   13232 kubeadm.go:310] 
	I0722 00:29:49.990697   13232 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 00:29:49.990697   13232 kubeadm.go:310] 
	I0722 00:29:49.990697   13232 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 00:29:49.990697   13232 kubeadm.go:310] 
	I0722 00:29:49.991313   13232 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 00:29:49.991394   13232 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 00:29:49.991607   13232 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 00:29:49.991607   13232 kubeadm.go:310] 
	I0722 00:29:49.991750   13232 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 00:29:49.991750   13232 kubeadm.go:310] 
	I0722 00:29:49.991750   13232 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 00:29:49.991750   13232 kubeadm.go:310] 
	I0722 00:29:49.991750   13232 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 00:29:49.991750   13232 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 00:29:49.992280   13232 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 00:29:49.992280   13232 kubeadm.go:310] 
	I0722 00:29:49.992460   13232 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 00:29:49.992680   13232 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 00:29:49.992680   13232 kubeadm.go:310] 
	I0722 00:29:49.992680   13232 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1axj62.jwf7mo13iodfl6h7 \
	I0722 00:29:49.992680   13232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b \
	I0722 00:29:49.993243   13232 kubeadm.go:310] 	--control-plane 
	I0722 00:29:49.993243   13232 kubeadm.go:310] 
	I0722 00:29:49.993243   13232 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 00:29:49.993243   13232 kubeadm.go:310] 
	I0722 00:29:49.993243   13232 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1axj62.jwf7mo13iodfl6h7 \
	I0722 00:29:49.993809   13232 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b 
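The --discovery-token-ca-cert-hash that kubeadm prints is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info; joining nodes recompute it to pin the CA. A sketch of the computation against the ca.crt path used above:

```go
// Sketch: compute a kubeadm-style discovery hash (sha256 over the CA
// cert's SubjectPublicKeyInfo) from a PEM ca.crt on disk.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	b, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(b)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```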
	I0722 00:29:49.993809   13232 cni.go:84] Creating CNI manager for ""
	I0722 00:29:49.993809   13232 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 00:29:49.995816   13232 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0722 00:29:50.010151   13232 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0722 00:29:50.018846   13232 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0722 00:29:50.018846   13232 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0722 00:29:50.067209   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0722 00:29:50.745732   13232 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 00:29:50.758776   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:50.758776   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-474700 minikube.k8s.io/updated_at=2024_07_22T00_29_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-474700 minikube.k8s.io/primary=true
	I0722 00:29:50.778945   13232 ops.go:34] apiserver oom_adj: -16
	I0722 00:29:50.976668   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:51.483758   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:51.988268   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:52.488134   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:52.984858   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:53.485672   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:53.989050   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:54.494583   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:54.978642   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:55.492111   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:55.993884   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:56.481797   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:56.981947   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:57.494095   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:57.988849   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:58.481027   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:58.987955   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:59.488870   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:29:59.978191   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:00.494860   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:00.982173   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:01.488311   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:01.985328   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:02.479630   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 00:30:02.667045   13232 kubeadm.go:1113] duration metric: took 11.9211221s to wait for elevateKubeSystemPrivileges
	I0722 00:30:02.667226   13232 kubeadm.go:394] duration metric: took 27.9005673s to StartCluster
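The run of near-identical kubectl calls above is a fixed-interval poll: minikube retries `get sa default` roughly every 500 ms until the default ServiceAccount exists, which signals the API is ready for the RBAC binding. A generic sketch of that wait (command and timeout are illustrative):

```go
// Sketch: poll a command every 500ms until it succeeds or a deadline
// passes, like the repeated `kubectl get sa default` runs above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func waitFor(cmd []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command(cmd[0], cmd[1:]...).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %v", timeout, cmd)
}

func main() {
	start := time.Now()
	if err := waitFor([]string{"kubectl", "get", "sa", "default"}, 2*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("duration metric: took %s\n", time.Since(start)) // cf. elevateKubeSystemPrivileges
}
```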
	I0722 00:30:02.667324   13232 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:02.667604   13232 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:30:02.668982   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:30:02.670547   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0722 00:30:02.670788   13232 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:30:02.670851   13232 start.go:241] waiting for startup goroutines ...
	I0722 00:30:02.670788   13232 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 00:30:02.670948   13232 addons.go:69] Setting storage-provisioner=true in profile "ha-474700"
	I0722 00:30:02.670948   13232 addons.go:69] Setting default-storageclass=true in profile "ha-474700"
	I0722 00:30:02.670948   13232 addons.go:234] Setting addon storage-provisioner=true in "ha-474700"
	I0722 00:30:02.670948   13232 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-474700"
	I0722 00:30:02.670948   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:30:02.670948   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:30:02.671903   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:30:02.672411   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:30:02.918795   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0722 00:30:03.330484   13232 start.go:971] {"host.minikube.internal": 172.28.192.1} host record injected into CoreDNS's ConfigMap
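The sed pipeline above rewrites the Corefile held in the coredns ConfigMap, inserting a hosts block that maps host.minikube.internal to the gateway just before the forward-to-resolv.conf line (it also adds a log directive before errors, omitted here). The core string surgery as a sketch (the sample Corefile is an assumption):

```go
// Sketch: insert a hosts{} stanza ahead of the forward line of a
// Corefile, the same edit the sed pipeline applies to the ConfigMap.
package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}` // assumed sample; the real Corefile comes from the coredns ConfigMap
	hosts := `        hosts {
           172.28.192.1 host.minikube.internal
           fallthrough
        }`
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hosts) // inject just before the forward line
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}
```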
	I0722 00:30:05.083688   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:05.083688   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:05.087304   13232 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 00:30:05.088066   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:05.088066   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:05.089756   13232 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:30:05.090310   13232 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:30:05.090350   13232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 00:30:05.090429   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:30:05.090636   13232 kapi.go:59] client config for ha-474700: &rest.Config{Host:"https://172.28.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2085e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 00:30:05.091972   13232 cert_rotation.go:137] Starting client certificate rotation controller
	I0722 00:30:05.092638   13232 addons.go:234] Setting addon default-storageclass=true in "ha-474700"
	I0722 00:30:05.092680   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:30:05.093401   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:30:07.652322   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:07.652380   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:07.652414   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:30:07.652414   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:07.652414   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:07.652414   13232 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 00:30:07.652414   13232 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 00:30:07.652414   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:30:10.058501   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:10.058501   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:10.058828   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:30:10.524513   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:30:10.525617   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:10.526104   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:30:10.664786   13232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 00:30:12.808109   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:30:12.808167   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:12.808513   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:30:12.949995   13232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 00:30:13.117236   13232 round_trippers.go:463] GET https://172.28.207.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0722 00:30:13.117320   13232 round_trippers.go:469] Request Headers:
	I0722 00:30:13.117640   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:30:13.117640   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:30:13.130268   13232 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0722 00:30:13.132255   13232 round_trippers.go:463] PUT https://172.28.207.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0722 00:30:13.132255   13232 round_trippers.go:469] Request Headers:
	I0722 00:30:13.132336   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:30:13.132336   13232 round_trippers.go:473]     Content-Type: application/json
	I0722 00:30:13.132336   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:30:13.136234   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:30:13.141440   13232 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0722 00:30:13.145766   13232 addons.go:510] duration metric: took 10.4748451s for enable addons: enabled=[storage-provisioner default-storageclass]
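The round-trip lines above show the addon manager confirming the default StorageClass over the API (a GET then a PUT on /apis/storage.k8s.io/v1/storageclasses). A hedged client-go sketch of the same read path, assuming the kubeconfig at the logged location:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Same endpoint the log shows: GET /apis/storage.k8s.io/v1/storageclasses
    	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, sc := range scs.Items {
    		fmt.Println(sc.Name, sc.Annotations["storageclass.kubernetes.io/is-default-class"])
    	}
    }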
	I0722 00:30:13.145766   13232 start.go:246] waiting for cluster config update ...
	I0722 00:30:13.145766   13232 start.go:255] writing updated cluster config ...
	I0722 00:30:13.148699   13232 out.go:177] 
	I0722 00:30:13.165352   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:30:13.165352   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:30:13.172926   13232 out.go:177] * Starting "ha-474700-m02" control-plane node in "ha-474700" cluster
	I0722 00:30:13.176650   13232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 00:30:13.176650   13232 cache.go:56] Caching tarball of preloaded images
	I0722 00:30:13.177457   13232 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 00:30:13.177457   13232 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 00:30:13.177457   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:30:13.185588   13232 start.go:360] acquireMachinesLock for ha-474700-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:30:13.186591   13232 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-474700-m02"
	I0722 00:30:13.186591   13232 start.go:93] Provisioning new machine with config: &{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:30:13.186591   13232 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0722 00:30:13.189622   13232 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 00:30:13.189622   13232 start.go:159] libmachine.API.Create for "ha-474700" (driver="hyperv")
	I0722 00:30:13.189622   13232 client.go:168] LocalClient.Create starting
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Parsing certificate...
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:30:13.190607   13232 main.go:141] libmachine: Parsing certificate...
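The "Reading certificate data ... Decoding PEM data ... Parsing certificate" triple corresponds to the standard Go flow of pem.Decode followed by x509.ParseCertificate. A self-contained sketch (the file path is the one from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem`)
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data) // first PEM block; remainder ignored here
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("CA subject:", cert.Subject, "expires:", cert.NotAfter)
    }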
	I0722 00:30:13.190607   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0722 00:30:15.141301   13232 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0722 00:30:15.142350   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:15.142350   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0722 00:30:17.044647   13232 main.go:141] libmachine: [stdout =====>] : False
	
	I0722 00:30:17.044791   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:17.044791   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:30:18.589777   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:30:18.590014   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:18.590014   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:30:22.258033   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:30:22.258596   13232 main.go:141] libmachine: [stderr =====>] : 
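Each `[executing ==>]` line runs powershell.exe with -NoProfile -NonInteractive and captures stdout and stderr separately, which is why the log prints them as paired blocks. A sketch of that pattern, decoding the Get-VMSwitch JSON shown above (the struct fields mirror the logged output):

    package main

    import (
    	"bytes"
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int
    }

    func main() {
    	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
    	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
    		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
    	cmd := exec.Command(ps, "-NoProfile", "-NonInteractive", script)
    	var stdout, stderr bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &stdout, &stderr
    	if err := cmd.Run(); err != nil {
    		panic(fmt.Sprint(err, ": ", stderr.String()))
    	}
    	var switches []vmSwitch
    	if err := json.Unmarshal(stdout.Bytes(), &switches); err != nil {
    		panic(err)
    	}
    	for _, s := range switches {
    		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
    	}
    }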
	I0722 00:30:22.260967   13232 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 00:30:22.748306   13232 main.go:141] libmachine: Creating SSH key...
	I0722 00:30:22.841237   13232 main.go:141] libmachine: Creating VM...
	I0722 00:30:22.841237   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:30:25.790856   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:30:25.790856   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:25.790856   13232 main.go:141] libmachine: Using switch "Default Switch"
	I0722 00:30:25.791269   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:30:27.596087   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:30:27.596755   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:27.596755   13232 main.go:141] libmachine: Creating VHD
	I0722 00:30:27.596755   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0722 00:30:31.585878   13232 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B8070AB8-BC9B-4CF6-8C0B-12CB14282C2C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0722 00:30:31.585878   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:31.585878   13232 main.go:141] libmachine: Writing magic tar header
	I0722 00:30:31.585878   13232 main.go:141] libmachine: Writing SSH key tar header
	I0722 00:30:31.597394   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0722 00:30:34.863492   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:34.864442   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:34.864528   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\disk.vhd' -SizeBytes 20000MB
	I0722 00:30:37.489805   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:37.489805   13232 main.go:141] libmachine: [stderr =====>] : 
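The VHD sequence above is the driver's trick for seeding the SSH key into the guest: create a tiny fixed-size VHD, overwrite its leading bytes with a tar stream containing the key (the "Writing magic tar header" lines), convert it to a dynamic VHD, and only then resize it to the requested 20000MB; the boot2docker image detects the tar magic on first boot and extracts the key. A hedged sketch of the tar-seeding step, based on the docker-machine convention (paths and layout are assumptions):

    package main

    import (
    	"archive/tar"
    	"os"
    )

    // seedSSHKey overwrites the beginning of a fixed VHD with a tar archive
    // holding the public key, mimicking the "magic tar header" step in the log.
    func seedSSHKey(vhdPath string, pubKey []byte) error {
    	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	// A fixed VHD keeps its footer at the end, so the data area starts at offset 0.
    	tw := tar.NewWriter(f)
    	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
    	if err := tw.WriteHeader(hdr); err != nil {
    		return err
    	}
    	if _, err := tw.Write(pubKey); err != nil {
    		return err
    	}
    	return tw.Close()
    }

    func main() {
    	key, err := os.ReadFile("id_rsa.pub") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	if err := seedSSHKey("fixed.vhd", key); err != nil {
    		panic(err)
    	}
    }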
	I0722 00:30:37.489805   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-474700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0722 00:30:41.252887   13232 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-474700-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0722 00:30:41.252968   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:41.252968   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-474700-m02 -DynamicMemoryEnabled $false
	I0722 00:30:43.633211   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:43.633211   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:43.633211   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-474700-m02 -Count 2
	I0722 00:30:45.902186   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:45.902186   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:45.902297   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-474700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\boot2docker.iso'
	I0722 00:30:48.613280   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:48.613280   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:48.613280   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-474700-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\disk.vhd'
	I0722 00:30:51.337962   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:51.337962   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:51.337962   13232 main.go:141] libmachine: Starting VM...
	I0722 00:30:51.338622   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-474700-m02
	I0722 00:30:54.526901   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:54.527080   13232 main.go:141] libmachine: [stderr =====>] : 
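VM assembly is a fixed sequence of PowerShell calls: New-VM on the chosen switch, Set-VMMemory with dynamic memory disabled (so the guest keeps the full 2200MB), Set-VMProcessor for the CPU count, Set-VMDvdDrive to attach the boot2docker ISO, Add-VMHardDiskDrive for the converted disk, then Start-VM. A sketch running the same sequence from Go (the machine directory is shortened for readability):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
    	vm, dir := "ha-474700-m02", `C:\minikube\machines\ha-474700-m02` // illustrative dir
    	steps := []string{
    		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, vm, dir),
    		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, vm),
    		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, vm),
    		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, vm, dir),
    		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, vm, dir),
    		fmt.Sprintf(`Hyper-V\Start-VM %s`, vm),
    	}
    	for _, s := range steps {
    		out, err := exec.Command(ps, "-NoProfile", "-NonInteractive", s).CombinedOutput()
    		if err != nil {
    			panic(fmt.Sprintf("%s: %v\n%s", s, err, out))
    		}
    	}
    }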
	I0722 00:30:54.527080   13232 main.go:141] libmachine: Waiting for host to start...
	I0722 00:30:54.527080   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:30:56.954495   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:30:56.954495   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:30:56.954495   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:30:59.605427   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:30:59.605427   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:00.611425   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:02.956736   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:02.957138   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:02.957201   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:05.566151   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:31:05.566714   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:06.574724   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:08.898856   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:08.898856   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:08.898856   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:11.552542   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:31:11.552934   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:12.556467   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:14.816279   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:14.816740   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:14.816931   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:17.431936   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:31:17.432199   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:18.435950   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:20.790432   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:20.790517   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:20.790517   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:23.401609   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:23.401932   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:23.402004   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:25.691254   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:25.692007   13232 main.go:141] libmachine: [stderr =====>] : 
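"Waiting for host to start..." is a poll loop: check (Get-VM).state, then ask the first network adapter for its first IP address; an empty stdout means the guest has no DHCP lease yet, so the driver sleeps about a second and retries (five rounds here before 172.28.200.182 appears). A minimal sketch of that loop (the retry budget is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func vmIP(vm string) (string, error) {
    	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
    	script := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
    	out, err := exec.Command(ps, "-NoProfile", "-NonInteractive", script).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	for i := 0; i < 60; i++ {
    		ip, err := vmIP("ha-474700-m02")
    		if err == nil && ip != "" {
    			fmt.Println("guest IP:", ip)
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	panic("timed out waiting for an IP")
    }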
	I0722 00:31:25.692270   13232 machine.go:94] provisionDockerMachine start ...
	I0722 00:31:25.692270   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:28.102838   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:28.102838   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:28.102954   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:30.832390   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:30.832632   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:30.837401   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:31:30.848587   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:31:30.848587   13232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:31:30.987963   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
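The "native" SSH client is an in-process golang.org/x/crypto/ssh connection rather than a shell-out to an external ssh binary; the first command it runs is a bare hostname probe, which still returns the ISO default "minikube" before the node name is set. A minimal sketch of that probe with the key path from the log:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa`)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // provisioning-time behavior; not for production
    	}
    	client, err := ssh.Dial("tcp", "172.28.200.182:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("hostname: %s", out)
    }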
	I0722 00:31:30.987963   13232 buildroot.go:166] provisioning hostname "ha-474700-m02"
	I0722 00:31:30.988079   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:33.210564   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:33.210650   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:33.210650   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:35.779839   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:35.779839   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:35.784421   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:31:35.785346   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:31:35.785346   13232 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-474700-m02 && echo "ha-474700-m02" | sudo tee /etc/hostname
	I0722 00:31:35.945908   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-474700-m02
	
	I0722 00:31:35.946066   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:38.129204   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:38.129204   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:38.129204   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:40.740804   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:40.741541   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:40.747901   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:31:40.748526   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:31:40.748526   13232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-474700-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-474700-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-474700-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:31:40.897291   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 00:31:40.897291   13232 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0722 00:31:40.897291   13232 buildroot.go:174] setting up certificates
	I0722 00:31:40.897291   13232 provision.go:84] configureAuth start
	I0722 00:31:40.897291   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:43.102290   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:43.102290   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:43.102290   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:45.728549   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:45.728549   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:45.728549   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:47.956379   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:47.956379   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:47.957245   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:50.602201   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:50.602935   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:50.603231   13232 provision.go:143] copyHostCerts
	I0722 00:31:50.603231   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0722 00:31:50.603762   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0722 00:31:50.603762   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0722 00:31:50.604212   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0722 00:31:50.605565   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0722 00:31:50.605955   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0722 00:31:50.605982   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0722 00:31:50.605982   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0722 00:31:50.606936   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0722 00:31:50.607628   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0722 00:31:50.607628   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0722 00:31:50.607708   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0722 00:31:50.608894   13232 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-474700-m02 san=[127.0.0.1 172.28.200.182 ha-474700-m02 localhost minikube]
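configureAuth generates a per-node server certificate signed by the shared minikube CA, with the SAN list from the line above (127.0.0.1, the guest IP, the node name, localhost, minikube). A compact sketch of building such a SAN-bearing certificate with crypto/x509 (self-signed here for brevity, where the real flow signs with the CA key):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-474700-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-474700-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.200.182")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }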
	I0722 00:31:50.864431   13232 provision.go:177] copyRemoteCerts
	I0722 00:31:50.875257   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:31:50.876204   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:53.104195   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:53.105250   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:53.105250   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:31:55.745667   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:31:55.746179   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:55.746179   13232 sshutil.go:53] new ssh client: &{IP:172.28.200.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa Username:docker}
	I0722 00:31:55.859007   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9836871s)
	I0722 00:31:55.859113   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0722 00:31:55.859350   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 00:31:55.908563   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0722 00:31:55.909043   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 00:31:55.962811   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0722 00:31:55.963010   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 00:31:56.011405   13232 provision.go:87] duration metric: took 15.1139223s to configureAuth
	I0722 00:31:56.011469   13232 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:31:56.012051   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:31:56.012051   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:31:58.284847   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:31:58.284914   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:31:58.284971   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:00.890390   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:00.890829   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:00.896407   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:32:00.897170   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:32:00.897170   13232 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 00:32:01.033921   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 00:32:01.033921   13232 buildroot.go:70] root file system type: tmpfs
	I0722 00:32:01.034305   13232 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 00:32:01.034305   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:03.227013   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:03.227800   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:03.227879   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:05.882398   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:05.883394   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:05.889303   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:32:05.889522   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:32:05.889522   13232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.196.103"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 00:32:06.067572   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.196.103
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 00:32:06.067572   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:08.274785   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:08.275034   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:08.275109   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:10.966387   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:10.966387   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:10.972755   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:32:10.973419   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:32:10.973419   13232 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 00:32:13.240949   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 00:32:13.241015   13232 machine.go:97] duration metric: took 47.5481423s to provisionDockerMachine
	I0722 00:32:13.241015   13232 client.go:171] duration metric: took 2m0.0498695s to LocalClient.Create
	I0722 00:32:13.241076   13232 start.go:167] duration metric: took 2m0.0499311s to libmachine.API.Create "ha-474700"
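The docker.service unit above is rendered on the host with the node-specific NO_PROXY peer list and shipped to docker.service.new via sudo tee; the activation one-liner is idempotent, replacing and restarting only when diff reports a difference (here diff fails because no unit existed yet, so the new file is installed and the multi-user.target symlink created). A reduced text/template sketch of the host-side rendering, templating only the field the log shows varying per node:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Trimmed-down version of the unit in the log; most static lines omitted.
    const unit = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket
    Requires=minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    Environment="NO_PROXY={{.NoProxy}}"
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem
    ExecReload=/bin/kill -s HUP $MAINPID

    [Install]
    WantedBy=multi-user.target
    `

    func main() {
    	t := template.Must(template.New("docker.service").Parse(unit))
    	if err := t.Execute(os.Stdout, struct{ NoProxy string }{"172.28.196.103"}); err != nil {
    		panic(err)
    	}
    }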
	I0722 00:32:13.241076   13232 start.go:293] postStartSetup for "ha-474700-m02" (driver="hyperv")
	I0722 00:32:13.241157   13232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:32:13.252446   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:32:13.252446   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:15.444356   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:15.444356   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:15.445350   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:18.058772   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:18.058772   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:18.058772   13232 sshutil.go:53] new ssh client: &{IP:172.28.200.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa Username:docker}
	I0722 00:32:18.169839   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9172163s)
	I0722 00:32:18.181671   13232 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:32:18.187528   13232 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:32:18.187528   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0722 00:32:18.188097   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0722 00:32:18.188969   13232 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0722 00:32:18.188969   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0722 00:32:18.200823   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:32:18.218163   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0722 00:32:18.263010   13232 start.go:296] duration metric: took 5.0218706s for postStartSetup
	I0722 00:32:18.266077   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:20.457590   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:20.457590   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:20.457590   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:23.084801   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:23.085075   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:23.085302   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:32:23.087998   13232 start.go:128] duration metric: took 2m9.8997595s to createHost
	I0722 00:32:23.087998   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:25.357336   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:25.357336   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:25.357336   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:28.064721   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:28.064721   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:28.076076   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:32:28.076661   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:32:28.076661   13232 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 00:32:28.221654   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721608348.229095581
	
	I0722 00:32:28.221654   13232 fix.go:216] guest clock: 1721608348.229095581
	I0722 00:32:28.221654   13232 fix.go:229] Guest: 2024-07-22 00:32:28.229095581 +0000 UTC Remote: 2024-07-22 00:32:23.0879982 +0000 UTC m=+344.013497901 (delta=5.141097381s)
	I0722 00:32:28.222199   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:30.472938   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:30.472938   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:30.473131   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:33.096407   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:33.096407   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:33.102650   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:32:33.103227   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.200.182 22 <nil> <nil>}
	I0722 00:32:33.103227   13232 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721608348
	I0722 00:32:33.261147   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jul 22 00:32:28 UTC 2024
	
	I0722 00:32:33.261211   13232 fix.go:236] clock set: Mon Jul 22 00:32:28 UTC 2024
	 (err=<nil>)
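The clock fix-up compares the guest's `date +%s.%N` output against the host-side Remote timestamp; here the guest ran about 5.14s apart from the host, beyond the drift tolerance, so the guest clock is snapped with `sudo date -s @<epoch>`. A sketch of the host-side comparison (values taken from the log):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts the guest's `date +%s.%N` output into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1721608348.229095581") // guest reading from the log
    	if err != nil {
    		panic(err)
    	}
    	delta := guest.Sub(time.Now()) // real code would compare against its own wall clock
    	fmt.Printf("guest drift %v; fix with: sudo date -s @%d\n", delta, guest.Unix())
    }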
	I0722 00:32:33.261211   13232 start.go:83] releasing machines lock for "ha-474700-m02", held for 2m20.0728446s
	I0722 00:32:33.261432   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:35.512963   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:35.512963   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:35.513684   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:38.166784   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:38.167551   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:38.170885   13232 out.go:177] * Found network options:
	I0722 00:32:38.174111   13232 out.go:177]   - NO_PROXY=172.28.196.103
	W0722 00:32:38.177988   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 00:32:38.180443   13232 out.go:177]   - NO_PROXY=172.28.196.103
	W0722 00:32:38.183470   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 00:32:38.184555   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 00:32:38.186565   13232 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0722 00:32:38.186565   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:38.196514   13232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 00:32:38.196514   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m02 ).state
	I0722 00:32:40.502778   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:40.502778   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:40.502778   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:40.502778   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:40.502778   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:40.503845   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:43.293995   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:43.294147   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:43.294403   13232 sshutil.go:53] new ssh client: &{IP:172.28.200.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa Username:docker}
	I0722 00:32:43.326711   13232 main.go:141] libmachine: [stdout =====>] : 172.28.200.182
	
	I0722 00:32:43.326711   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:43.327705   13232 sshutil.go:53] new ssh client: &{IP:172.28.200.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m02\id_rsa Username:docker}
	I0722 00:32:43.402876   13232 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2062964s)
	W0722 00:32:43.402876   13232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:32:43.415421   13232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:32:43.420503   13232 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2338723s)
	W0722 00:32:43.420503   13232 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
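The exit-127 failure above is the registry connectivity probe: the command name was built with the Windows host's `.exe` suffix but executed inside the Linux guest, where bash finds no `curl.exe`; the run then falls through to the proxy warning printed a few lines down. A hedged, illustrative sketch of keying the binary name off the target system rather than the host's runtime.GOOS (helper name is hypothetical):

    package main

    import (
    	"fmt"
    	"runtime"
    )

    // curlBinary picks the curl name for where the command will actually run.
    // remoteOS would come from probing the guest (e.g. uname), not from the host.
    func curlBinary(remoteOS string) string {
    	if remoteOS == "windows" {
    		return "curl.exe"
    	}
    	return "curl"
    }

    func main() {
    	// Host is windows here, but the probe runs inside the linux guest:
    	fmt.Println("host GOOS:", runtime.GOOS)
    	fmt.Println("probe command:", curlBinary("linux"), "-sS -m 2 https://registry.k8s.io/")
    }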
	I0722 00:32:43.447831   13232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:32:43.447831   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:32:43.447831   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:32:43.495734   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 00:32:43.528176   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0722 00:32:43.538498   13232 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0722 00:32:43.538498   13232 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0722 00:32:43.553235   13232 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 00:32:43.565792   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 00:32:43.598126   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:32:43.628988   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 00:32:43.661088   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:32:43.693494   13232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:32:43.727743   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 00:32:43.759158   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 00:32:43.789520   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 00:32:43.821489   13232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:32:43.849725   13232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:32:43.881804   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:32:44.094365   13232 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 00:32:44.127015   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:32:44.138426   13232 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 00:32:44.173331   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:32:44.218434   13232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:32:44.262481   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:32:44.307163   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:32:44.345094   13232 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 00:32:44.404693   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:32:44.428982   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:32:44.474901   13232 ssh_runner.go:195] Run: which cri-dockerd
	I0722 00:32:44.492978   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 00:32:44.512026   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 00:32:44.556017   13232 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 00:32:44.748279   13232 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 00:32:44.940396   13232 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 00:32:44.940589   13232 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 00:32:44.988131   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:32:45.191921   13232 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 00:32:47.834316   13232 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6423618s)
	I0722 00:32:47.845789   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 00:32:47.880608   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:32:47.915622   13232 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 00:32:48.128204   13232 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 00:32:48.328212   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:32:48.545087   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 00:32:48.585516   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:32:48.618785   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:32:48.814005   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 00:32:48.920286   13232 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 00:32:48.932135   13232 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 00:32:48.941804   13232 start.go:563] Will wait 60s for crictl version
	I0722 00:32:48.952104   13232 ssh_runner.go:195] Run: which crictl
	I0722 00:32:48.970352   13232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:32:49.024710   13232 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
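With /etc/crictl.yaml now pointing at the cri-dockerd socket, the version probe above can be reproduced by hand; a sketch assuming the socket path shown in the log:

    # ask the CRI endpoint for the same version block the test logged
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version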
	I0722 00:32:49.031618   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:32:49.081617   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:32:49.118513   13232 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 00:32:49.121988   13232 out.go:177]   - env NO_PROXY=172.28.196.103
	I0722 00:32:49.127060   13232 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0722 00:32:49.130991   13232 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0722 00:32:49.130991   13232 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0722 00:32:49.130991   13232 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0722 00:32:49.130991   13232 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e8:0a:ec Flags:up|broadcast|multicast|running}
	I0722 00:32:49.133996   13232 ip.go:210] interface addr: fe80::cedd:59ec:4db2:d0bf/64
	I0722 00:32:49.133996   13232 ip.go:210] interface addr: 172.28.192.1/20
	I0722 00:32:49.144994   13232 ssh_runner.go:195] Run: grep 172.28.192.1	host.minikube.internal$ /etc/hosts
	I0722 00:32:49.156106   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
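The one-liner above is minikube's idiom for refreshing a pinned /etc/hosts entry atomically: drop any existing line that ends in the hostname, append the new mapping, then copy the temp file back with sudo. A generalized sketch (variable names are illustrative, not from the log):

    HOSTS_IP=172.28.192.1
    HOSTS_NAME=host.minikube.internal
    # rebuild /etc/hosts without the stale entry, then append the fresh one
    { grep -v $'\t'"$HOSTS_NAME"'$' /etc/hosts; printf '%s\t%s\n' "$HOSTS_IP" "$HOSTS_NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$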
	I0722 00:32:49.177454   13232 mustload.go:65] Loading cluster: ha-474700
	I0722 00:32:49.178115   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:32:49.178115   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:32:51.380908   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:51.381584   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:51.381664   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:32:51.382519   13232 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700 for IP: 172.28.200.182
	I0722 00:32:51.382519   13232 certs.go:194] generating shared ca certs ...
	I0722 00:32:51.382519   13232 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:32:51.383760   13232 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0722 00:32:51.384340   13232 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0722 00:32:51.384655   13232 certs.go:256] generating profile certs ...
	I0722 00:32:51.385280   13232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.key
	I0722 00:32:51.385410   13232 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.c396fe80
	I0722 00:32:51.385623   13232 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.c396fe80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.196.103 172.28.200.182 172.28.207.254]
	I0722 00:32:51.553909   13232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.c396fe80 ...
	I0722 00:32:51.553909   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.c396fe80: {Name:mka6070aeb7f4cde3be31aaa596d95d9c034e587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:32:51.555670   13232 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.c396fe80 ...
	I0722 00:32:51.555670   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.c396fe80: {Name:mk6afa224509e2e4545fafb434a2e97f50f307ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:32:51.556729   13232 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.c396fe80 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt
	I0722 00:32:51.569613   13232 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.c396fe80 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key
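minikube generates the apiserver certificate in Go (crypto.go), but an equivalent openssl recipe makes the SAN list explicit; a sketch under the assumption that ca.crt, ca.key and apiserver.key are the files referenced above (the IPs are the service VIP, localhost, the node IPs and the kube-vip HA VIP):

    # hypothetical openssl equivalent of the cert written at 00:32:51
    cat > san.cnf <<'EOF'
    subjectAltName = IP:10.96.0.1, IP:127.0.0.1, IP:10.0.0.1, IP:172.28.196.103, IP:172.28.200.182, IP:172.28.207.254
    EOF
    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -extfile san.cnf -out apiserver.crt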
	I0722 00:32:51.571754   13232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key
	I0722 00:32:51.571754   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 00:32:51.572089   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0722 00:32:51.572231   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 00:32:51.572423   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 00:32:51.572607   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 00:32:51.572607   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 00:32:51.572832   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 00:32:51.572832   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 00:32:51.573434   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem (1338 bytes)
	W0722 00:32:51.573815   13232 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100_empty.pem, impossibly tiny 0 bytes
	I0722 00:32:51.573815   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0722 00:32:51.574153   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0722 00:32:51.574533   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0722 00:32:51.574765   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0722 00:32:51.575036   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem (1708 bytes)
	I0722 00:32:51.575554   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /usr/share/ca-certificates/51002.pem
	I0722 00:32:51.575774   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:32:51.575943   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem -> /usr/share/ca-certificates/5100.pem
	I0722 00:32:51.576127   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:32:53.774921   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:32:53.774958   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:53.775105   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:32:56.397483   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:32:56.397483   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:32:56.397782   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:32:56.506834   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0722 00:32:56.515655   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0722 00:32:56.550345   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0722 00:32:56.557248   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0722 00:32:56.588373   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0722 00:32:56.594475   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0722 00:32:56.626291   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0722 00:32:56.633194   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0722 00:32:56.663279   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0722 00:32:56.669787   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0722 00:32:56.699138   13232 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0722 00:32:56.705828   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0722 00:32:56.729850   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:32:56.776166   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:32:56.818868   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:32:56.870982   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 00:32:56.922233   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0722 00:32:56.971389   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:32:57.020551   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:32:57.081073   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:32:57.139509   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /usr/share/ca-certificates/51002.pem (1708 bytes)
	I0722 00:32:57.188037   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:32:57.241859   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem --> /usr/share/ca-certificates/5100.pem (1338 bytes)
	I0722 00:32:57.287990   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0722 00:32:57.319658   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0722 00:32:57.351886   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0722 00:32:57.388110   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0722 00:32:57.419666   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0722 00:32:57.451263   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0722 00:32:57.485749   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0722 00:32:57.544292   13232 ssh_runner.go:195] Run: openssl version
	I0722 00:32:57.565401   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51002.pem && ln -fs /usr/share/ca-certificates/51002.pem /etc/ssl/certs/51002.pem"
	I0722 00:32:57.601097   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51002.pem
	I0722 00:32:57.607937   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:45 /usr/share/ca-certificates/51002.pem
	I0722 00:32:57.621507   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51002.pem
	I0722 00:32:57.642603   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51002.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 00:32:57.672729   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:32:57.704656   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:32:57.713016   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:32:57.725638   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:32:57.747604   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:32:57.779563   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5100.pem && ln -fs /usr/share/ca-certificates/5100.pem /etc/ssl/certs/5100.pem"
	I0722 00:32:57.810050   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5100.pem
	I0722 00:32:57.816659   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:45 /usr/share/ca-certificates/5100.pem
	I0722 00:32:57.828122   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5100.pem
	I0722 00:32:57.847891   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5100.pem /etc/ssl/certs/51391683.0"
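Each test -L / ln -fs pair above recreates OpenSSL's subject-hash symlink (e.g. b5213941.0 for minikubeCA.pem), which is how the system trust store looks a CA up by hash. The link name can be derived from the cert rather than hard-coded; a sketch:

    # compute the subject hash and (re)create the matching symlink
    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"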
	I0722 00:32:57.877407   13232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:32:57.884040   13232 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 00:32:57.884040   13232 kubeadm.go:934] updating node {m02 172.28.200.182 8443 v1.30.3 docker true true} ...
	I0722 00:32:57.884922   13232 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-474700-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.200.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
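The fragment above is the systemd drop-in minikube renders for the joining node; the empty ExecStart= line clears the base unit's command before the node-specific one is set. Once it is copied into /etc/systemd/system/kubelet.service.d/ (the scp at 00:33:02 below), the merged unit can be inspected with:

    sudo systemctl daemon-reload
    systemctl cat kubelet    # shows the drop-in with the flags above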
	I0722 00:32:57.884922   13232 kube-vip.go:115] generating kube-vip config ...
	I0722 00:32:57.898718   13232 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 00:32:57.925136   13232 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 00:32:57.925270   13232 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
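kube-vip runs as a static pod (the manifest is copied to /etc/kubernetes/manifests below) and, with cp_enable and lb_enable set, announces 172.28.207.254 on eth0 via ARP and load-balances port 8443 across control-plane members. A sketch of post-start checks, assuming the VIP and interface from the config; an unauthenticated 401/403 from the second command still proves the VIP answers:

    ip addr show eth0 | grep 172.28.207.254
    curl -k https://172.28.207.254:8443/healthz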
	I0722 00:32:57.937925   13232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:32:57.954474   13232 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0722 00:32:57.967758   13232 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0722 00:32:57.992420   13232 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl
	I0722 00:32:57.993221   13232 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm
	I0722 00:32:57.993221   13232 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet
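download.go fetches each binary alongside its published .sha256 and verifies before staging. The same scheme by hand for one binary, using the URL layout from the log (the upstream .sha256 files contain only the hex digest, hence the constructed checksum line):

    V=v1.30.3
    curl -fsSLo kubelet "https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet"
    echo "$(curl -fsSL "https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet.sha256")  kubelet" | sha256sum -c -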
	I0722 00:32:59.191230   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 00:32:59.203913   13232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 00:32:59.211674   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0722 00:32:59.211674   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0722 00:33:00.424513   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 00:33:00.435494   13232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 00:33:00.443643   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0722 00:33:00.443643   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0722 00:33:02.102341   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:33:02.129355   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 00:33:02.141937   13232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 00:33:02.149897   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0722 00:33:02.150120   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0722 00:33:02.725260   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0722 00:33:02.745718   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0722 00:33:02.776743   13232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:33:02.807290   13232 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0722 00:33:02.851102   13232 ssh_runner.go:195] Run: grep 172.28.207.254	control-plane.minikube.internal$ /etc/hosts
	I0722 00:33:02.857981   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:33:02.892335   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:33:03.102108   13232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:33:03.130995   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:33:03.131830   13232 start.go:317] joinCluster: &{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.200.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:33:03.132070   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0722 00:33:03.132218   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:33:05.318159   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:33:05.318159   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:33:05.318159   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:33:07.951907   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:33:07.951907   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:33:07.952173   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:33:08.189099   13232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0569656s)
	I0722 00:33:08.189099   13232 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.200.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:33:08.189099   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mt5mtd.9u04lu9vp7f5a7d3 --discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-474700-m02 --control-plane --apiserver-advertise-address=172.28.200.182 --apiserver-bind-port=8443"
	I0722 00:33:53.619376   13232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mt5mtd.9u04lu9vp7f5a7d3 --discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-474700-m02 --control-plane --apiserver-advertise-address=172.28.200.182 --apiserver-bind-port=8443": (45.429709s)
	I0722 00:33:53.620353   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0722 00:33:54.442608   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-474700-m02 minikube.k8s.io/updated_at=2024_07_22T00_33_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-474700 minikube.k8s.io/primary=false
	I0722 00:33:55.120210   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-474700-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0722 00:33:55.313557   13232 start.go:319] duration metric: took 52.1810745s to joinCluster
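The join command printed on the primary at 00:33:03 bundles a fresh bootstrap token with the CA public-key hash seen in the --discovery-token-ca-cert-hash flag. Both pieces can be regenerated independently on the existing control plane, which is the standard recipe when a printed command has expired; a sketch:

    sudo kubeadm token create --print-join-command --ttl=0
    # recompute the sha256 hash of the cluster CA's public key
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'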
	I0722 00:33:55.313901   13232 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.28.200.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:33:55.314477   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:33:55.317644   13232 out.go:177] * Verifying Kubernetes components...
	I0722 00:33:55.332050   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:33:55.745405   13232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:33:55.773616   13232 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:33:55.774436   13232 kapi.go:59] client config for ha-474700: &rest.Config{Host:"https://172.28.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2085e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0722 00:33:55.774611   13232 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.207.254:8443 with https://172.28.196.103:8443
	I0722 00:33:55.775721   13232 node_ready.go:35] waiting up to 6m0s for node "ha-474700-m02" to be "Ready" ...
	I0722 00:33:55.775959   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:55.776014   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:55.776014   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:55.776061   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:55.798288   13232 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0722 00:33:56.289761   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:56.289848   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:56.289848   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:56.289934   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:56.297160   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:33:56.783141   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:56.783440   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:56.783440   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:56.783440   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:56.788245   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:33:57.278747   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:57.278809   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:57.278809   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:57.278809   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:57.284244   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:33:57.787454   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:57.787752   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:57.787752   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:57.787752   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:57.794077   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:33:57.794077   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:33:58.279829   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:58.279896   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:58.279896   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:58.279896   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:58.290506   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:33:58.786134   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:58.786134   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:58.786134   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:58.786134   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:58.793152   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:33:59.279723   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:59.279947   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:59.279947   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:59.279947   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:59.299564   13232 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0722 00:33:59.790738   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:33:59.790830   13232 round_trippers.go:469] Request Headers:
	I0722 00:33:59.790830   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:33:59.790887   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:33:59.796481   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:33:59.797645   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:00.281101   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:00.281101   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:00.281101   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:00.281101   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:00.286117   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:00.789417   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:00.789584   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:00.789736   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:00.789736   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:00.795398   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:01.278707   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:01.278707   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:01.278707   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:01.278707   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:01.285111   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:01.786625   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:01.786625   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:01.786718   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:01.786718   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:01.791430   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:02.281722   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:02.281808   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:02.281808   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:02.281808   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:02.289563   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:34:02.291083   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:02.784781   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:02.785068   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:02.785068   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:02.785068   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:02.950295   13232 round_trippers.go:574] Response Status: 200 OK in 165 milliseconds
	I0722 00:34:03.286504   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:03.286504   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:03.286504   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:03.286504   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:03.293090   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:03.785806   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:03.786111   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:03.786156   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:03.786156   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:03.795007   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:34:04.288523   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:04.288523   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:04.288838   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:04.288838   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:04.298809   13232 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 00:34:04.299812   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:04.790884   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:04.790884   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:04.791088   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:04.791088   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:04.796111   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:05.291764   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:05.291764   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:05.291764   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:05.291764   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:05.297370   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:05.776369   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:05.776369   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:05.776369   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:05.776369   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:05.780870   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:06.278203   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:06.278203   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:06.278327   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:06.278327   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:06.282908   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:06.777527   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:06.777527   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:06.777527   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:06.777527   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:06.782949   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:06.784509   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:07.286772   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:07.286995   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:07.286995   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:07.286995   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:07.295390   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:34:07.782195   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:07.782195   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:07.782195   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:07.782428   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:07.786186   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:34:08.290855   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:08.290855   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:08.290998   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:08.290998   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:08.297084   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:08.783060   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:08.783299   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:08.783299   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:08.783299   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:08.790659   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:34:08.792026   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:09.289423   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:09.289695   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:09.289695   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:09.289695   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:09.294328   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:09.780714   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:09.780778   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:09.780778   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:09.780778   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:09.786758   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:10.286800   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:10.286800   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:10.286800   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:10.286800   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:10.293409   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:10.790781   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:10.790863   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:10.790863   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:10.790863   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:10.796499   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:10.797422   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:11.290558   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:11.290558   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:11.290558   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:11.290558   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:11.297195   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:11.777203   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:11.777203   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:11.777203   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:11.777203   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:11.781597   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:12.291272   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:12.291272   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:12.291272   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:12.291272   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:12.296882   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:12.777591   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:12.777689   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:12.777689   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:12.777689   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:12.782967   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:13.282748   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:13.282748   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:13.282748   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:13.282748   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:13.286787   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:13.288961   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:13.787719   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:13.787719   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:13.787719   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:13.787719   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:13.793219   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:14.287485   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:14.287485   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:14.287485   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:14.287485   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:14.294073   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:14.785598   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:14.785863   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:14.785863   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:14.785863   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:14.790197   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:15.288893   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:15.288893   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:15.288893   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:15.288893   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:15.293663   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:15.295113   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:15.787071   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:15.787272   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:15.787272   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:15.787272   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:15.792151   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:16.289652   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:16.289761   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:16.289761   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:16.289761   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:16.295151   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:16.778744   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:16.779014   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:16.779014   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:16.779014   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:16.783589   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:17.281211   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:17.281211   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:17.281306   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:17.281306   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:17.286782   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:17.777251   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:17.777346   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:17.777346   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:17.777436   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:17.801067   13232 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0722 00:34:17.802513   13232 node_ready.go:53] node "ha-474700-m02" has status "Ready":"False"
	I0722 00:34:18.281552   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:18.281552   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:18.281552   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:18.281552   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:18.288076   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:18.783473   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:18.783585   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:18.783585   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:18.783585   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:18.789229   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:19.285966   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:19.285966   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.285966   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.286089   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.291884   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:19.292156   13232 node_ready.go:49] node "ha-474700-m02" has status "Ready":"True"
	I0722 00:34:19.292708   13232 node_ready.go:38] duration metric: took 23.5166943s for node "ha-474700-m02" to be "Ready" ...
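The ~24 seconds of repeated GET /api/v1/nodes/ha-474700-m02 calls above are the node_ready poll: fetch the Node object roughly every 500ms and check its Ready condition. A minimal client-go sketch of that loop (the clientset construction and the exact interval are illustrative assumptions, not minikube's code):

	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady mirrors the polling loop in the log: GET the node
	// about every 500ms until its Ready condition reports True.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("node %q is Ready\n", name)
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}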
	I0722 00:34:19.292708   13232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:34:19.292958   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:34:19.292958   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.292958   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.292958   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.299065   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:19.309537   13232 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.309537   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fwrd4
	I0722 00:34:19.309537   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.309537   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.309537   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.314329   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:19.315566   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:19.315566   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.315566   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.315566   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.319106   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:34:19.320406   13232 pod_ready.go:92] pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:19.320406   13232 pod_ready.go:81] duration metric: took 10.8683ms for pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.320406   13232 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.320753   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ndgcf
	I0722 00:34:19.320753   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.320849   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.320935   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.327245   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:19.328201   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:19.328201   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.328201   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.328201   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.332953   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:19.334054   13232 pod_ready.go:92] pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:19.334054   13232 pod_ready.go:81] duration metric: took 13.301ms for pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.334054   13232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.334054   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700
	I0722 00:34:19.334054   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.334054   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.334054   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.338707   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:19.338707   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:19.338707   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.338707   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.338707   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.343940   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:19.344537   13232 pod_ready.go:92] pod "etcd-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:19.344537   13232 pod_ready.go:81] duration metric: took 10.4826ms for pod "etcd-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.344537   13232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.345166   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700-m02
	I0722 00:34:19.345218   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.345218   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.345312   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.349315   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:34:19.349941   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:19.350049   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.350049   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.350049   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.353686   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:34:19.354907   13232 pod_ready.go:92] pod "etcd-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:19.354907   13232 pod_ready.go:81] duration metric: took 10.3697ms for pod "etcd-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.354907   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.492030   13232 request.go:629] Waited for 137.1222ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700
	I0722 00:34:19.492297   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700
	I0722 00:34:19.492297   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.492297   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.492297   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.498628   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:19.694640   13232 request.go:629] Waited for 195.7499ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:19.694779   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:19.694819   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.694819   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.694819   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.708584   13232 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0722 00:34:19.709146   13232 pod_ready.go:92] pod "kube-apiserver-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:19.709146   13232 pod_ready.go:81] duration metric: took 354.2348ms for pod "kube-apiserver-ha-474700" in "kube-system" namespace to be "Ready" ...
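The recurring "Waited for ...ms due to client-side throttling, not priority and fairness" lines come from client-go's default local rate limiter (QPS 5, burst 10), not from the server's API Priority and Fairness. A sketch of the knob involved; the raised values below are illustrative assumptions:

	package sketch

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newClient builds a clientset with a raised client-side rate limit,
	// so bursts of GETs like the ones above are not delayed locally.
	func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // client-go default is 5
		cfg.Burst = 100 // client-go default is 10
		return kubernetes.NewForConfig(cfg)
	}

With the defaults, any burst beyond 10 requests queues locally, which is exactly the 100-200ms waits logged during the per-pod checks above.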
	I0722 00:34:19.709146   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:19.899512   13232 request.go:629] Waited for 189.9726ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m02
	I0722 00:34:19.899830   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m02
	I0722 00:34:19.899830   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:19.899830   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:19.899830   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:19.905454   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:20.088127   13232 request.go:629] Waited for 180.4673ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:20.088357   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:20.088357   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:20.088357   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:20.088357   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:20.094235   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:20.095070   13232 pod_ready.go:92] pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:20.095142   13232 pod_ready.go:81] duration metric: took 385.9917ms for pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:20.095211   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:20.289630   13232 request.go:629] Waited for 194.1867ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700
	I0722 00:34:20.289932   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700
	I0722 00:34:20.289932   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:20.290014   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:20.290014   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:20.294731   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:20.494735   13232 request.go:629] Waited for 198.6017ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:20.494832   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:20.494832   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:20.494923   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:20.494923   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:20.499879   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:20.501399   13232 pod_ready.go:92] pod "kube-controller-manager-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:20.501399   13232 pod_ready.go:81] duration metric: took 406.1833ms for pod "kube-controller-manager-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:20.501399   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:20.696593   13232 request.go:629] Waited for 194.8896ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m02
	I0722 00:34:20.696697   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m02
	I0722 00:34:20.696697   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:20.696697   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:20.696786   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:20.701997   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:20.898945   13232 request.go:629] Waited for 195.2615ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:20.899162   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:20.899162   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:20.899162   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:20.899162   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:20.905760   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:20.906974   13232 pod_ready.go:92] pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:20.907055   13232 pod_ready.go:81] duration metric: took 405.6509ms for pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:20.907111   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwkpc" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:21.087719   13232 request.go:629] Waited for 180.5074ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkpc
	I0722 00:34:21.087878   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkpc
	I0722 00:34:21.087942   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:21.087942   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:21.087942   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:21.098543   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:34:21.290538   13232 request.go:629] Waited for 189.8849ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:21.290667   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:21.290667   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:21.290667   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:21.290730   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:21.295499   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:34:21.295499   13232 pod_ready.go:92] pod "kube-proxy-fwkpc" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:21.295499   13232 pod_ready.go:81] duration metric: took 388.3838ms for pod "kube-proxy-fwkpc" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:21.295499   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kmnj9" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:21.492538   13232 request.go:629] Waited for 196.8842ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kmnj9
	I0722 00:34:21.492694   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kmnj9
	I0722 00:34:21.492694   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:21.492694   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:21.492694   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:21.503618   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:34:21.697095   13232 request.go:629] Waited for 192.2779ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:21.697095   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:21.697095   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:21.697095   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:21.697095   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:21.702977   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:21.704052   13232 pod_ready.go:92] pod "kube-proxy-kmnj9" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:21.704138   13232 pod_ready.go:81] duration metric: took 408.6337ms for pod "kube-proxy-kmnj9" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:21.704243   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:21.888691   13232 request.go:629] Waited for 184.1546ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700
	I0722 00:34:21.888819   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700
	I0722 00:34:21.888819   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:21.888819   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:21.888997   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:21.894719   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:22.090649   13232 request.go:629] Waited for 194.3317ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:22.090918   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:34:22.090918   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.090918   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.090918   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.099157   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:34:22.099157   13232 pod_ready.go:92] pod "kube-scheduler-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:22.100066   13232 pod_ready.go:81] duration metric: took 395.8185ms for pod "kube-scheduler-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:22.100066   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:22.294815   13232 request.go:629] Waited for 194.6039ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m02
	I0722 00:34:22.294947   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m02
	I0722 00:34:22.294947   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.294947   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.294947   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.303916   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:34:22.497682   13232 request.go:629] Waited for 192.255ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:22.497923   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:34:22.497923   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.497923   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.497923   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.502957   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:34:22.504383   13232 pod_ready.go:92] pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:34:22.504498   13232 pod_ready.go:81] duration metric: took 404.3121ms for pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:34:22.504498   13232 pod_ready.go:38] duration metric: took 3.2115s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
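Each pod_ready block above pairs a GET on the pod with a GET on its node; the pod half reduces to checking the PodReady condition. A minimal sketch, again assuming a configured clientset:

	package sketch

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReady applies the test pod_ready.go logs above: a pod counts as
	// ready only when its PodReady condition reports True.
	func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}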
	I0722 00:34:22.504564   13232 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:34:22.517318   13232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:34:22.552204   13232 api_server.go:72] duration metric: took 27.2378505s to wait for apiserver process to appear ...
	I0722 00:34:22.552258   13232 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:34:22.552293   13232 api_server.go:253] Checking apiserver healthz at https://172.28.196.103:8443/healthz ...
	I0722 00:34:22.560133   13232 api_server.go:279] https://172.28.196.103:8443/healthz returned 200:
	ok
	I0722 00:34:22.560547   13232 round_trippers.go:463] GET https://172.28.196.103:8443/version
	I0722 00:34:22.560673   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.560673   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.560673   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.562135   13232 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 00:34:22.562135   13232 api_server.go:141] control plane version: v1.30.3
	I0722 00:34:22.562135   13232 api_server.go:131] duration metric: took 9.8424ms to wait for apiserver health ...
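The healthz probe above is an ordinary HTTPS GET that must return 200 with a body of "ok". A minimal sketch, assuming the cluster CA is available at a caller-supplied path (minikube itself builds its TLS config from the kubeconfig):

	package sketch

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	// checkHealthz performs the GET https://<apiserver>/healthz probe;
	// only a 200 with body "ok" counts as healthy, as in the log above.
	func checkHealthz(endpoint, caPath string) error {
		ca, err := os.ReadFile(caPath) // caPath is a placeholder for the cluster CA
		if err != nil {
			return err
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(ca) {
			return fmt.Errorf("no certificates parsed from %s", caPath)
		}
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
		}
		return nil
	}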
	I0722 00:34:22.562135   13232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:34:22.686288   13232 request.go:629] Waited for 124.1516ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:34:22.686288   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:34:22.686288   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.686288   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.686288   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.695440   13232 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 00:34:22.705822   13232 system_pods.go:59] 17 kube-system pods found
	I0722 00:34:22.705822   13232 system_pods.go:61] "coredns-7db6d8ff4d-fwrd4" [3d8cf645-4238-4079-a401-18ff3ffdbf66] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "coredns-7db6d8ff4d-ndgcf" [ce30ed50-b5a7-4742-9f83-c60ecd47dc31] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "etcd-ha-474700" [b1ca44b2-3832-4a56-8bd1-c233907d8de3] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "etcd-ha-474700-m02" [f05d667f-c484-47ec-9be9-d5fe65452238] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kindnet-kldv9" [01a2e280-762e-40bc-b79a-66e935b52f26] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kindnet-xmjbz" [c65e9a3b-0f40-4424-af70-b56d7c04018c] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-apiserver-ha-474700" [881080dc-0756-4d59-ae7f-9b1ed240dd5d] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-apiserver-ha-474700-m02" [5906cda9-2d5a-486d-acc3-babb58a51586] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-controller-manager-ha-474700" [9bbed77b-5977-48a3-9816-d3734482dd9c] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-controller-manager-ha-474700-m02" [2e24aaa1-d708-451f-bf42-9d3b887463ea] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-proxy-fwkpc" [896d5fb8-be02-42a8-8ddf-260154a34162] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-proxy-kmnj9" [6a6597e3-9ae2-43cb-8838-ce01b1e9476f] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-scheduler-ha-474700" [fc771043-36f2-49a1-9675-b647b88f692b] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-scheduler-ha-474700-m02" [dd7e08b2-b3bf-4e32-8159-73bfeb9e1c33] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-vip-ha-474700" [f6aaa6ef-c03c-4ff3-889e-dc765c688373] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "kube-vip-ha-474700-m02" [6c94d6e9-f93f-4971-ab0d-6978c39375df] Running
	I0722 00:34:22.705822   13232 system_pods.go:61] "storage-provisioner" [f289ea73-0be9-4a29-92d2-2897ee8972a6] Running
	I0722 00:34:22.705822   13232 system_pods.go:74] duration metric: took 143.6848ms to wait for pod list to return data ...
	I0722 00:34:22.705822   13232 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:34:22.891767   13232 request.go:629] Waited for 185.8247ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/default/serviceaccounts
	I0722 00:34:22.891767   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/default/serviceaccounts
	I0722 00:34:22.891767   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:22.891767   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:22.891767   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:22.898410   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:34:22.898971   13232 default_sa.go:45] found service account: "default"
	I0722 00:34:22.898971   13232 default_sa.go:55] duration metric: took 193.1472ms for default service account to be created ...
	I0722 00:34:22.899030   13232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:34:23.095386   13232 request.go:629] Waited for 196.2966ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:34:23.095386   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:34:23.095386   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:23.095386   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:23.095386   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:23.105371   13232 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 00:34:23.115240   13232 system_pods.go:86] 17 kube-system pods found
	I0722 00:34:23.115240   13232 system_pods.go:89] "coredns-7db6d8ff4d-fwrd4" [3d8cf645-4238-4079-a401-18ff3ffdbf66] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "coredns-7db6d8ff4d-ndgcf" [ce30ed50-b5a7-4742-9f83-c60ecd47dc31] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "etcd-ha-474700" [b1ca44b2-3832-4a56-8bd1-c233907d8de3] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "etcd-ha-474700-m02" [f05d667f-c484-47ec-9be9-d5fe65452238] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "kindnet-kldv9" [01a2e280-762e-40bc-b79a-66e935b52f26] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "kindnet-xmjbz" [c65e9a3b-0f40-4424-af70-b56d7c04018c] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "kube-apiserver-ha-474700" [881080dc-0756-4d59-ae7f-9b1ed240dd5d] Running
	I0722 00:34:23.115297   13232 system_pods.go:89] "kube-apiserver-ha-474700-m02" [5906cda9-2d5a-486d-acc3-babb58a51586] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-controller-manager-ha-474700" [9bbed77b-5977-48a3-9816-d3734482dd9c] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-controller-manager-ha-474700-m02" [2e24aaa1-d708-451f-bf42-9d3b887463ea] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-proxy-fwkpc" [896d5fb8-be02-42a8-8ddf-260154a34162] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-proxy-kmnj9" [6a6597e3-9ae2-43cb-8838-ce01b1e9476f] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-scheduler-ha-474700" [fc771043-36f2-49a1-9675-b647b88f692b] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-scheduler-ha-474700-m02" [dd7e08b2-b3bf-4e32-8159-73bfeb9e1c33] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-vip-ha-474700" [f6aaa6ef-c03c-4ff3-889e-dc765c688373] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "kube-vip-ha-474700-m02" [6c94d6e9-f93f-4971-ab0d-6978c39375df] Running
	I0722 00:34:23.115355   13232 system_pods.go:89] "storage-provisioner" [f289ea73-0be9-4a29-92d2-2897ee8972a6] Running
	I0722 00:34:23.115355   13232 system_pods.go:126] duration metric: took 216.3225ms to wait for k8s-apps to be running ...
	I0722 00:34:23.115355   13232 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:34:23.127417   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:34:23.160162   13232 system_svc.go:56] duration metric: took 44.8064ms WaitForService to wait for kubelet
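The kubelet check is one command run over the node's SSH session, sudo systemctl is-active --quiet service kubelet, whose exit status is the whole answer. A sketch with golang.org/x/crypto/ssh, assuming an already-dialed *ssh.Client:

	package sketch

	import (
		"golang.org/x/crypto/ssh"
	)

	// kubeletActive runs the is-active probe from the log; systemctl
	// exits zero only when the unit is active, so a clean Run means
	// "running" and a non-zero exit means "not active".
	func kubeletActive(client *ssh.Client) (bool, error) {
		sess, err := client.NewSession()
		if err != nil {
			return false, err
		}
		defer sess.Close()
		if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
			if _, ok := err.(*ssh.ExitError); ok {
				return false, nil // non-zero exit: unit not active
			}
			return false, err
		}
		return true, nil
	}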
	I0722 00:34:23.160162   13232 kubeadm.go:582] duration metric: took 27.8458012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:34:23.160284   13232 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:34:23.300291   13232 request.go:629] Waited for 139.5192ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes
	I0722 00:34:23.300291   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes
	I0722 00:34:23.300521   13232 round_trippers.go:469] Request Headers:
	I0722 00:34:23.300521   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:34:23.300521   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:34:23.309393   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:34:23.310664   13232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:34:23.310734   13232 node_conditions.go:123] node cpu capacity is 2
	I0722 00:34:23.310734   13232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:34:23.310734   13232 node_conditions.go:123] node cpu capacity is 2
	I0722 00:34:23.310832   13232 node_conditions.go:105] duration metric: took 150.5463ms to run NodePressure ...
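The NodePressure figures just above (2 CPUs, 17734596Ki ephemeral storage per node) are read straight from each Node's reported capacity. A sketch of where those numbers live, clientset again assumed:

	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printCapacity lists the same fields the node_conditions check logs.
	func printCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
		return nil
	}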
	I0722 00:34:23.310832   13232 start.go:241] waiting for startup goroutines ...
	I0722 00:34:23.310874   13232 start.go:255] writing updated cluster config ...
	I0722 00:34:23.315219   13232 out.go:177] 
	I0722 00:34:23.333985   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:34:23.333985   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:34:23.340026   13232 out.go:177] * Starting "ha-474700-m03" control-plane node in "ha-474700" cluster
	I0722 00:34:23.342562   13232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 00:34:23.342675   13232 cache.go:56] Caching tarball of preloaded images
	I0722 00:34:23.342675   13232 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 00:34:23.342675   13232 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 00:34:23.343299   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:34:23.350424   13232 start.go:360] acquireMachinesLock for ha-474700-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 00:34:23.350424   13232 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-474700-m03"
	I0722 00:34:23.351131   13232 start.go:93] Provisioning new machine with config: &{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.200.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:34:23.351131   13232 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0722 00:34:23.356153   13232 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 00:34:23.356548   13232 start.go:159] libmachine.API.Create for "ha-474700" (driver="hyperv")
	I0722 00:34:23.356611   13232 client.go:168] LocalClient.Create starting
	I0722 00:34:23.356611   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0722 00:34:23.357232   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:34:23.357232   13232 main.go:141] libmachine: Parsing certificate...
	I0722 00:34:23.357232   13232 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0722 00:34:23.357232   13232 main.go:141] libmachine: Decoding PEM data...
	I0722 00:34:23.357232   13232 main.go:141] libmachine: Parsing certificate...
	I0722 00:34:23.357232   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0722 00:34:25.394227   13232 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0722 00:34:25.394227   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:25.394227   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0722 00:34:27.226733   13232 main.go:141] libmachine: [stdout =====>] : False
	
	I0722 00:34:27.226733   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:27.227710   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:34:28.817089   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:34:28.817089   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:28.817089   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:34:32.691791   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:34:32.691836   13232 main.go:141] libmachine: [stderr =====>] : 
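Every "[executing ==>]" line above is the hyperv driver shelling out to powershell.exe and scraping stdout; for the switch query it additionally asks PowerShell for JSON so the Go side can decode it. A stripped-down sketch of that pattern (the struct fields follow the Select Id, Name, SwitchType projection in the log; the function name is illustrative):

	package sketch

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// vmSwitch mirrors the projected Get-VMSwitch fields shown in the log.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	// listSwitches runs the same ConvertTo-Json pipeline and decodes it.
	func listSwitches() ([]vmSwitch, error) {
		ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
			`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
		if err != nil {
			return nil, err
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			return nil, fmt.Errorf("decoding Get-VMSwitch output: %w", err)
		}
		return switches, nil
	}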
	I0722 00:34:32.693393   13232 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 00:34:33.123300   13232 main.go:141] libmachine: Creating SSH key...
	I0722 00:34:33.386554   13232 main.go:141] libmachine: Creating VM...
	I0722 00:34:33.386554   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 00:34:36.419129   13232 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 00:34:36.420075   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:36.420218   13232 main.go:141] libmachine: Using switch "Default Switch"
	I0722 00:34:36.420307   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 00:34:38.254442   13232 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 00:34:38.255454   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:38.255454   13232 main.go:141] libmachine: Creating VHD
	I0722 00:34:38.255766   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0722 00:34:42.166938   13232 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1F01A6BA-8DFA-4937-A9FD-1F86FE935E68
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0722 00:34:42.166938   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:42.166938   13232 main.go:141] libmachine: Writing magic tar header
	I0722 00:34:42.166938   13232 main.go:141] libmachine: Writing SSH key tar header
	I0722 00:34:42.179873   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0722 00:34:45.474466   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:34:45.474856   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:45.474938   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\disk.vhd' -SizeBytes 20000MB
	I0722 00:34:48.099119   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:34:48.099119   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:48.099607   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-474700-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0722 00:34:51.905240   13232 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-474700-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0722 00:34:51.905240   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:51.905240   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-474700-m03 -DynamicMemoryEnabled $false
	I0722 00:34:54.255014   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:34:54.256026   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:54.256026   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-474700-m03 -Count 2
	I0722 00:34:56.548325   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:34:56.548360   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:56.548459   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-474700-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\boot2docker.iso'
	I0722 00:34:59.259003   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:34:59.259003   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:34:59.259003   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-474700-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\disk.vhd'
	I0722 00:35:02.019167   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:02.019167   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:02.019167   13232 main.go:141] libmachine: Starting VM...
	I0722 00:35:02.019457   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-474700-m03
	I0722 00:35:05.309376   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:05.309376   13232 main.go:141] libmachine: [stderr =====>] : 
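The sequence above is the whole Hyper-V provisioning path: create a small fixed VHD, overwrite its tail with a tar header carrying the SSH key, convert it to a dynamic disk and resize it, then New-VM on "Default Switch", pin memory and CPUs, attach the boot2docker ISO and the disk, and Start-VM. Condensed into one hedged Go sketch (the run helper, names, and paths are illustrative; the sizes are the log's, and the cmdlets must run in this order):

	package sketch

	import (
		"fmt"
		"os/exec"
	)

	// run executes one PowerShell command, like the [executing ==>] lines.
	func run(cmd string) error {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %v\n%s", cmd, err, out)
		}
		return nil
	}

	// createVM replays the cmdlet sequence from the log for VM name and
	// machine directory dir (2200MB RAM, 2 CPUs, as above).
	func createVM(name, dir string) error {
		steps := []string{
			fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir),
			fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
			fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name),
			fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
			fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
			fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
		}
		for _, s := range steps {
			if err := run(s); err != nil {
				return err
			}
		}
		return nil
	}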
	I0722 00:35:05.309807   13232 main.go:141] libmachine: Waiting for host to start...
	I0722 00:35:05.309917   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:07.727223   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:07.727223   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:07.728041   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:10.419238   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:10.419484   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:11.421311   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:13.753573   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:13.753695   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:13.753695   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:16.361010   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:16.361010   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:17.375290   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:19.703675   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:19.703875   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:19.703981   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:22.383816   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:22.384600   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:23.385776   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:25.704946   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:25.704946   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:25.705090   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:28.429512   13232 main.go:141] libmachine: [stdout =====>] : 
	I0722 00:35:28.429512   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:29.443177   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:31.791613   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:31.791613   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:31.791860   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:34.477469   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:35:34.478086   13232 main.go:141] libmachine: [stderr =====>] : 
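The alternating state/ipaddresses probes above, with empty stdout until 00:35:34, are the driver waiting for the guest's integration services to publish an address, retrying roughly once per second. A sketch of that wait (timeout and helper shape are assumptions):

	package sketch

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForIP polls the VM's first NIC until Hyper-V reports an
	// address, as the repeated ipaddresses[0] probes in the log do.
	func waitForIP(name string, timeout time.Duration) (string, error) {
		ps := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, name)
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
			if err != nil {
				return "", err
			}
			if ip := strings.TrimSpace(string(out)); ip != "" {
				return ip, nil // e.g. 172.28.196.120 in the run above
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("no IP for %s within %s", name, timeout)
	}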
	I0722 00:35:34.478086   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:36.720307   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:36.720307   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:36.720477   13232 machine.go:94] provisionDockerMachine start ...
	I0722 00:35:36.720584   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:39.000916   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:39.000916   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:39.001186   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:41.632817   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:35:41.632934   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:41.638726   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:35:41.654435   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:35:41.654545   13232 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 00:35:41.795334   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 00:35:41.795467   13232 buildroot.go:166] provisioning hostname "ha-474700-m03"
	I0722 00:35:41.795697   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:44.040199   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:44.040398   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:44.040398   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:46.690918   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:35:46.690918   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:46.697209   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:35:46.697408   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:35:46.697408   13232 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-474700-m03 && echo "ha-474700-m03" | sudo tee /etc/hostname
	I0722 00:35:46.866437   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-474700-m03
	
	I0722 00:35:46.866528   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:49.130201   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:49.130201   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:49.130790   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:51.854045   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:35:51.854347   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:51.860156   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:35:51.860909   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:35:51.860909   13232 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-474700-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-474700-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-474700-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 00:35:52.020476   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 
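
The empty SSH output above is the success path: no existing /etc/hosts line ended in the node name, so the script rewrote the Debian-style 127.0.1.1 alias line. The same idempotent logic re-expressed in Go for clarity; this is an illustration, not minikube's own code:

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostname mirrors the shell snippet above: keep the file as-is if
	// the name is already mapped, otherwise rewrite the 127.0.1.1 line, and
	// append one only if no such line exists.
	func ensureHostname(hosts, name string) string {
		for _, l := range strings.Split(hosts, "\n") {
			l = strings.TrimRight(l, " \t")
			if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
				return hosts
			}
		}
		lines := strings.Split(hosts, "\n")
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + name
	}

	func main() {
		fmt.Println(ensureHostname("127.0.0.1 localhost\n127.0.1.1 minikube", "ha-474700-m03"))
	}
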
	I0722 00:35:52.020476   13232 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0722 00:35:52.020584   13232 buildroot.go:174] setting up certificates
	I0722 00:35:52.020584   13232 provision.go:84] configureAuth start
	I0722 00:35:52.020655   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:54.256801   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:54.256801   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:54.257165   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:35:56.925731   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:35:56.925731   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:56.926099   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:35:59.201223   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:35:59.201223   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:35:59.202413   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:01.870489   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:01.870615   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:01.870615   13232 provision.go:143] copyHostCerts
	I0722 00:36:01.870776   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0722 00:36:01.871123   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0722 00:36:01.871123   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0722 00:36:01.871620   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0722 00:36:01.872777   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0722 00:36:01.873144   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0722 00:36:01.873208   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0722 00:36:01.873688   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0722 00:36:01.874321   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0722 00:36:01.874889   13232 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0722 00:36:01.874889   13232 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0722 00:36:01.875271   13232 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0722 00:36:01.876368   13232 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-474700-m03 san=[127.0.0.1 172.28.196.120 ha-474700-m03 localhost minikube]
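
The SAN list in the line above is what lets a single server.pem satisfy TLS connections addressed to the node IP, the node hostname, or loopback. A compressed Go sketch of issuing such a certificate; the throwaway in-memory CA stands in for the ca.pem/ca-key.pem pair named in the log, and the validity window is arbitrary:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Throwaway CA; the real run signs with the existing minikube CA.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Server certificate with the org and SAN set from the log line above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-474700-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			DNSNames:     []string{"ha-474700-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.196.120")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}
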
	I0722 00:36:02.112908   13232 provision.go:177] copyRemoteCerts
	I0722 00:36:02.124908   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 00:36:02.124908   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:04.336044   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:04.336044   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:04.336906   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:07.043170   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:07.043170   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:07.043170   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\id_rsa Username:docker}
	I0722 00:36:07.169898   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0444428s)
	I0722 00:36:07.169967   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0722 00:36:07.170458   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 00:36:07.215611   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0722 00:36:07.215611   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 00:36:07.262794   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0722 00:36:07.263373   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 00:36:07.312960   13232 provision.go:87] duration metric: took 15.2921453s to configureAuth
	I0722 00:36:07.312960   13232 buildroot.go:189] setting minikube options for container-runtime
	I0722 00:36:07.313623   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:36:07.313623   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:09.599109   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:09.599109   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:09.599174   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:12.302682   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:12.303681   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:12.309238   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:36:12.309780   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:36:12.309780   13232 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 00:36:12.450442   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 00:36:12.450442   13232 buildroot.go:70] root file system type: tmpfs
	I0722 00:36:12.450792   13232 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 00:36:12.450960   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:14.677118   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:14.677118   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:14.677118   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:17.315714   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:17.316356   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:17.321739   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:36:17.322202   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:36:17.322354   13232 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.196.103"
	Environment="NO_PROXY=172.28.196.103,172.28.200.182"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 00:36:17.487480   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.196.103
	Environment=NO_PROXY=172.28.196.103,172.28.200.182
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 00:36:17.487685   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:19.737704   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:19.737704   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:19.737704   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:22.414646   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:22.414766   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:22.420167   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:36:22.421138   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:36:22.421200   13232 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 00:36:24.713630   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 00:36:24.713630   13232 machine.go:97] duration metric: took 47.9925701s to provisionDockerMachine
	I0722 00:36:24.713630   13232 client.go:171] duration metric: took 2m1.3555374s to LocalClient.Create
	I0722 00:36:24.713630   13232 start.go:167] duration metric: took 2m1.355601s to libmachine.API.Create "ha-474700"
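
Two details of the unit install above are worth calling out. First, systemd lets later Environment= assignments override earlier ones for the same variable, so of the two NO_PROXY lines only the second, two-address value takes effect. Second, the `diff -u old new || { mv ...; systemctl ...; }` guard rewrites and restarts docker only when the rendered unit actually changed; on this first boot diff fails because the file does not exist yet, hence the "Created symlink" output. The unit itself is rendered from a Go template inside the provisioner; below is a compressed stand-in with illustrative field names, not minikube's real template:

	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Description=Docker Application Container Engine
	After=network.target minikube-automount.service docker.socket
	Requires=minikube-automount.service docker.socket

	[Service]
	Type=notify
	Restart=on-failure
	{{range .NoProxy}}Environment="NO_PROXY={{.}}"
	{{end}}# Clear the inherited ExecStart before setting ours.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{.Flags}}
	ExecReload=/bin/kill -s HUP $MAINPID

	[Install]
	WantedBy=multi-user.target
	`

	func main() {
		data := struct {
			NoProxy []string
			Flags   string
		}{
			NoProxy: []string{"172.28.196.103,172.28.200.182"},
			Flags:   "--label provider=hyperv --insecure-registry 10.96.0.0/12",
		}
		t := template.Must(template.New("docker.service").Parse(unit))
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}
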
	I0722 00:36:24.713630   13232 start.go:293] postStartSetup for "ha-474700-m03" (driver="hyperv")
	I0722 00:36:24.713630   13232 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 00:36:24.727972   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 00:36:24.727972   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:26.940838   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:26.940838   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:26.941378   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:29.631168   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:29.631383   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:29.631504   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\id_rsa Username:docker}
	I0722 00:36:29.736594   13232 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0085611s)
	I0722 00:36:29.749465   13232 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 00:36:29.756356   13232 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 00:36:29.756493   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0722 00:36:29.756572   13232 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0722 00:36:29.757890   13232 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0722 00:36:29.757890   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0722 00:36:29.769568   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 00:36:29.789216   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0722 00:36:29.836118   13232 start.go:296] duration metric: took 5.122426s for postStartSetup
	I0722 00:36:29.839300   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:32.065395   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:32.065439   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:32.065520   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:34.713090   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:34.713637   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:34.713931   13232 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\config.json ...
	I0722 00:36:34.716519   13232 start.go:128] duration metric: took 2m11.3637463s to createHost
	I0722 00:36:34.716519   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:36.933876   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:36.933876   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:36.933876   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:39.565291   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:39.565291   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:39.571840   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:36:39.572638   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:36:39.572638   13232 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 00:36:39.708100   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721608599.724808444
	
	I0722 00:36:39.708212   13232 fix.go:216] guest clock: 1721608599.724808444
	I0722 00:36:39.708212   13232 fix.go:229] Guest: 2024-07-22 00:36:39.724808444 +0000 UTC Remote: 2024-07-22 00:36:34.7165199 +0000 UTC m=+595.638912801 (delta=5.008288544s)
	I0722 00:36:39.708212   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:41.916852   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:41.917908   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:41.917961   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:44.552831   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:44.552831   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:44.558744   13232 main.go:141] libmachine: Using SSH client type: native
	I0722 00:36:44.559349   13232 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.196.120 22 <nil> <nil>}
	I0722 00:36:44.559349   13232 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721608599
	I0722 00:36:44.713095   13232 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jul 22 00:36:39 UTC 2024
	
	I0722 00:36:44.713177   13232 fix.go:236] clock set: Mon Jul 22 00:36:39 UTC 2024
	 (err=<nil>)
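
The clock fix above is a two-step probe: run `date +%s.%N` in the guest (the %!s/%!N tokens in the logged command are the same re-formatting artifact described earlier), compare against host time (delta=5.008288544s in this run), and push `sudo date -s @<unix-seconds>` when the drift is too large. A sketch of the comparison, with an illustrative 2-second tolerance:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Guest stdout from `date +%s.%N`, copied from the log above.
		guestOut := "1721608599.724808444"
		secs, err := strconv.ParseInt(strings.SplitN(guestOut, ".", 2)[0], 10, 64)
		if err != nil {
			panic(err)
		}
		drift := time.Since(time.Unix(secs, 0))
		fmt.Printf("guest/host drift: %v\n", drift)
		if drift > 2*time.Second || drift < -2*time.Second {
			fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
		}
	}
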
	I0722 00:36:44.713177   13232 start.go:83] releasing machines lock for "ha-474700-m03", held for 2m21.3605001s
	I0722 00:36:44.713432   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:46.924525   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:46.924525   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:46.925070   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:49.598394   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:49.598394   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:49.601724   13232 out.go:177] * Found network options:
	I0722 00:36:49.603968   13232 out.go:177]   - NO_PROXY=172.28.196.103,172.28.200.182
	W0722 00:36:49.606780   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 00:36:49.606780   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 00:36:49.609202   13232 out.go:177]   - NO_PROXY=172.28.196.103,172.28.200.182
	W0722 00:36:49.611812   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 00:36:49.611846   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 00:36:49.612298   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 00:36:49.612298   13232 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 00:36:49.616247   13232 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0722 00:36:49.616380   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:49.627120   13232 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 00:36:49.627120   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700-m03 ).state
	I0722 00:36:51.932258   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:51.932258   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:51.932258   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:51.972481   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:36:51.972481   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:51.972559   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700-m03 ).networkadapters[0]).ipaddresses[0]
	I0722 00:36:54.775594   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:54.775697   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:54.776000   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\id_rsa Username:docker}
	I0722 00:36:54.803485   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.120
	
	I0722 00:36:54.804081   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:36:54.804455   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700-m03\id_rsa Username:docker}
	I0722 00:36:54.873927   13232 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2575819s)
	W0722 00:36:54.874104   13232 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
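
This exit-127 failure is worth flagging: the reachability probe was composed with the Windows binary name, curl.exe, but executed through ssh_runner inside the Linux guest, where no such command exists. Connectivity to registry.k8s.io was therefore never actually tested, and the "Failing to connect" warning printed just below is a false signal on Hyper-V runs. A sketch of the obvious fix, picking the binary by where the command runs rather than by the host OS (the isGuestLinux flag is hypothetical):

	package main

	import "fmt"

	// curlBinary chooses the probe command for the environment that will
	// actually execute it.
	func curlBinary(isGuestLinux bool, hostOS string) string {
		if isGuestLinux || hostOS != "windows" {
			return "curl"
		}
		return "curl.exe"
	}

	func main() {
		fmt.Println(curlBinary(true, "windows")) // "curl", which the guest can run
	}
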
	I0722 00:36:54.907679   13232 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2804958s)
	W0722 00:36:54.907679   13232 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 00:36:54.919797   13232 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 00:36:54.950791   13232 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 00:36:54.950791   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:36:54.950791   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0722 00:36:54.990421   13232 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0722 00:36:54.990421   13232 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0722 00:36:55.000715   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 00:36:55.034751   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 00:36:55.060227   13232 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 00:36:55.071665   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 00:36:55.103004   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:36:55.135635   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 00:36:55.168288   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 00:36:55.199840   13232 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 00:36:55.232583   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 00:36:55.268384   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 00:36:55.299531   13232 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 00:36:55.329997   13232 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 00:36:55.362518   13232 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 00:36:55.393645   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:36:55.609056   13232 ssh_runner.go:195] Run: sudo systemctl restart containerd
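
The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, force the runc v2 runtime, point conf_dir at /etc/cni/net.d, and flip SystemdCgroup off so containerd agrees with the "cgroupfs" driver chosen for this guest. The cgroup flip expressed in Go, as an illustration of the same regex rewrite:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}
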
	I0722 00:36:55.642610   13232 start.go:495] detecting cgroup driver to use...
	I0722 00:36:55.654767   13232 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 00:36:55.692272   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:36:55.732504   13232 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 00:36:55.772466   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 00:36:55.807707   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:36:55.842765   13232 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 00:36:55.902280   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 00:36:55.925612   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 00:36:55.973614   13232 ssh_runner.go:195] Run: which cri-dockerd
	I0722 00:36:55.989732   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 00:36:56.007499   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 00:36:56.050425   13232 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 00:36:56.250577   13232 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 00:36:56.453617   13232 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 00:36:56.453617   13232 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 00:36:56.504327   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:36:56.707125   13232 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 00:36:59.313427   13232 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6062709s)
	I0722 00:36:59.325120   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 00:36:59.363158   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:36:59.399933   13232 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 00:36:59.603909   13232 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 00:36:59.823122   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:37:00.028193   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 00:37:00.068537   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 00:37:00.110280   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:37:00.318684   13232 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 00:37:00.430119   13232 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 00:37:00.443023   13232 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 00:37:00.451863   13232 start.go:563] Will wait 60s for crictl version
	I0722 00:37:00.463543   13232 ssh_runner.go:195] Run: which crictl
	I0722 00:37:00.479465   13232 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 00:37:00.538583   13232 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 00:37:00.549369   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:37:00.594728   13232 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 00:37:00.637511   13232 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 00:37:00.643968   13232 out.go:177]   - env NO_PROXY=172.28.196.103
	I0722 00:37:00.646205   13232 out.go:177]   - env NO_PROXY=172.28.196.103,172.28.200.182
	I0722 00:37:00.649291   13232 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0722 00:37:00.652750   13232 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0722 00:37:00.652750   13232 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0722 00:37:00.653779   13232 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0722 00:37:00.653779   13232 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e8:0a:ec Flags:up|broadcast|multicast|running}
	I0722 00:37:00.656696   13232 ip.go:210] interface addr: fe80::cedd:59ec:4db2:d0bf/64
	I0722 00:37:00.656696   13232 ip.go:210] interface addr: 172.28.192.1/20
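
getIPForInterface above scans the host's adapters for the one named with the "vEthernet (Default Switch)" prefix and takes its IPv4 address, 172.28.192.1, which is then written into the guest as host.minikube.internal. The same scan with the standard library:

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	func main() {
		ifaces, err := net.Interfaces()
		if err != nil {
			panic(err)
		}
		for _, iface := range ifaces {
			if !strings.HasPrefix(iface.Name, "vEthernet (Default Switch)") {
				continue
			}
			addrs, err := iface.Addrs()
			if err != nil {
				continue
			}
			for _, a := range addrs {
				// Skip the fe80:: link-local entry; keep the IPv4 one.
				if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
					fmt.Println(ipn.IP) // 172.28.192.1 in this run
				}
			}
		}
	}
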
	I0722 00:37:00.668281   13232 ssh_runner.go:195] Run: grep 172.28.192.1	host.minikube.internal$ /etc/hosts
	I0722 00:37:00.674832   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 00:37:00.697489   13232 mustload.go:65] Loading cluster: ha-474700
	I0722 00:37:00.698191   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:37:00.698869   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:37:02.872420   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:37:02.872420   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:37:02.872420   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:37:02.874108   13232 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700 for IP: 172.28.196.120
	I0722 00:37:02.874108   13232 certs.go:194] generating shared ca certs ...
	I0722 00:37:02.874200   13232 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:37:02.874466   13232 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0722 00:37:02.875107   13232 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0722 00:37:02.875202   13232 certs.go:256] generating profile certs ...
	I0722 00:37:02.875556   13232 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\client.key
	I0722 00:37:02.875556   13232 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.0446dbb5
	I0722 00:37:02.876311   13232 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.0446dbb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.196.103 172.28.200.182 172.28.196.120 172.28.207.254]
	I0722 00:37:03.305979   13232 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.0446dbb5 ...
	I0722 00:37:03.305979   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.0446dbb5: {Name:mkdaa609a243c04f8e19fadeebb19c304ceabc4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:37:03.307551   13232 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.0446dbb5 ...
	I0722 00:37:03.307551   13232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.0446dbb5: {Name:mke4bb823a6cb6ba99c36a4f3e04a4b18f7f04a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 00:37:03.308122   13232 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt.0446dbb5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt
	I0722 00:37:03.323123   13232 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key.0446dbb5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key
	I0722 00:37:03.324114   13232 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key
	I0722 00:37:03.324114   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 00:37:03.324114   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0722 00:37:03.324853   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 00:37:03.325074   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 00:37:03.325168   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 00:37:03.325423   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 00:37:03.325556   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 00:37:03.325690   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 00:37:03.326182   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem (1338 bytes)
	W0722 00:37:03.326539   13232 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100_empty.pem, impossibly tiny 0 bytes
	I0722 00:37:03.326678   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0722 00:37:03.327089   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0722 00:37:03.327490   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0722 00:37:03.327882   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0722 00:37:03.328426   13232 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem (1708 bytes)
	I0722 00:37:03.328682   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:37:03.328913   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem -> /usr/share/ca-certificates/5100.pem
	I0722 00:37:03.329086   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /usr/share/ca-certificates/51002.pem
	I0722 00:37:03.329295   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:37:05.596764   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:37:05.597074   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:37:05.597074   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:37:08.268556   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:37:08.268556   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:37:08.269481   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:37:08.368656   13232 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0722 00:37:08.375810   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0722 00:37:08.412061   13232 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0722 00:37:08.418523   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0722 00:37:08.450426   13232 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0722 00:37:08.456396   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0722 00:37:08.488560   13232 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0722 00:37:08.494568   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0722 00:37:08.533867   13232 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0722 00:37:08.540722   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0722 00:37:08.573009   13232 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0722 00:37:08.580294   13232 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0722 00:37:08.600750   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 00:37:08.650658   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 00:37:08.700308   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 00:37:08.748824   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 00:37:08.799916   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0722 00:37:08.847179   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 00:37:08.893251   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 00:37:08.942053   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-474700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 00:37:08.988247   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 00:37:09.034533   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem --> /usr/share/ca-certificates/5100.pem (1338 bytes)
	I0722 00:37:09.084353   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /usr/share/ca-certificates/51002.pem (1708 bytes)
	I0722 00:37:09.131526   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0722 00:37:09.164583   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0722 00:37:09.195841   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0722 00:37:09.228611   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0722 00:37:09.266275   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0722 00:37:09.298934   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0722 00:37:09.334701   13232 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0722 00:37:09.382221   13232 ssh_runner.go:195] Run: openssl version
	I0722 00:37:09.402243   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 00:37:09.434432   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:37:09.441388   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:37:09.454195   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 00:37:09.476834   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 00:37:09.507828   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5100.pem && ln -fs /usr/share/ca-certificates/5100.pem /etc/ssl/certs/5100.pem"
	I0722 00:37:09.540792   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5100.pem
	I0722 00:37:09.548223   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:45 /usr/share/ca-certificates/5100.pem
	I0722 00:37:09.560855   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5100.pem
	I0722 00:37:09.582630   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5100.pem /etc/ssl/certs/51391683.0"
	I0722 00:37:09.616481   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51002.pem && ln -fs /usr/share/ca-certificates/51002.pem /etc/ssl/certs/51002.pem"
	I0722 00:37:09.652231   13232 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51002.pem
	I0722 00:37:09.659905   13232 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:45 /usr/share/ca-certificates/51002.pem
	I0722 00:37:09.675056   13232 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51002.pem
	I0722 00:37:09.695983   13232 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51002.pem /etc/ssl/certs/3ec20f2e.0"
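
The test -L / ln -fs pairs above build OpenSSL's hashed-directory layout: verifiers that trust /etc/ssl/certs look a certificate up by <subject-hash>.0, where the hash is exactly what `openssl x509 -hash -noout` prints (b5213941, 51391683 and 3ec20f2e here). A sketch that derives one link name the same way; the PEM path is copied from the log and assumed to exist wherever this runs:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pemPath, hash)
	}
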
	I0722 00:37:09.730578   13232 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 00:37:09.736721   13232 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 00:37:09.736721   13232 kubeadm.go:934] updating node {m03 172.28.196.120 8443 v1.30.3 docker true true} ...
	I0722 00:37:09.736721   13232 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-474700-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.196.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 00:37:09.736721   13232 kube-vip.go:115] generating kube-vip config ...
	I0722 00:37:09.749142   13232 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 00:37:09.780407   13232 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 00:37:09.780557   13232 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
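
This static pod runs kube-vip with leader election over the plndr-cp-lock lease, so exactly one control-plane node answers ARP for the VIP 172.28.207.254 at any time, and the lb_enable/lb_port settings correspond to the "auto-enabling control-plane load-balancing" line above. A toy reading of the lease knobs; the values are copied from the manifest, but the bound is a rough rule of thumb, not kube-vip documentation:

	package main

	import "fmt"

	func main() {
		leaseDuration := 5 // vip_leaseduration: seconds a holder may go without renewing
		retryPeriod := 1   // vip_retryperiod: seconds between acquisition attempts
		// Rough upper bound on how long the VIP is unowned after the leader dies:
		// the stale lease must expire, then a standby must win the next retry.
		fmt.Printf("worst-case VIP takeover ~ %ds\n", leaseDuration+retryPeriod)
	}
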
	I0722 00:37:09.792701   13232 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 00:37:09.809384   13232 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0722 00:37:09.821178   13232 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0722 00:37:09.838950   13232 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0722 00:37:09.839117   13232 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0722 00:37:09.839117   13232 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0722 00:37:09.839340   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 00:37:09.839424   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
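
Since nothing is cached on this runner, all three binaries come straight from dl.k8s.io, with the checksum= fragment pinning each download to its published .sha256 file. A self-contained sketch of that verification for kubectl; the URL is taken from the log and error handling is trimmed to the essentials:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		got := sha256.Sum256(bin)
		if want := strings.Fields(string(sum))[0]; hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch for " + base)
		}
		fmt.Println("kubectl digest verified")
	}
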
	I0722 00:37:09.853647   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:37:09.856286   13232 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 00:37:09.857238   13232 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 00:37:09.879596   13232 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 00:37:09.879596   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0722 00:37:09.879720   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0722 00:37:09.879879   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0722 00:37:09.879974   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0722 00:37:09.892202   13232 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 00:37:09.943022   13232 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0722 00:37:09.943022   13232 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
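
The transfer above follows a check-then-copy pattern: each guest path is stat'ed first, and the cached binary is scp'ed only when stat exits non-zero (the "Process exited with status 1" lines). A minimal sketch of that decision flow, with hypothetical run/transfer callbacks standing in for minikube's ssh_runner:

	package main

	import "fmt"

	// ensureBinary mirrors the check-then-transfer flow in the log:
	// a failing stat (exit status 1) means the file is absent, so copy it.
	func ensureBinary(run func(cmd string) error, transfer func(src, dst string) error, src, dst string) error {
		if err := run(fmt.Sprintf("stat -c %q %s", "%s %y", dst)); err == nil {
			return nil // binary already present; skip the transfer
		}
		return transfer(src, dst)
	}

	func main() {
		// stub callbacks for illustration; pretend the file is missing
		run := func(cmd string) error { fmt.Println("ssh:", cmd); return fmt.Errorf("exit 1") }
		transfer := func(src, dst string) error { fmt.Printf("scp %s --> %s\n", src, dst); return nil }
		_ = ensureBinary(run, transfer, `C:\cache\linux\amd64\v1.30.3\kubelet`, "/var/lib/minikube/binaries/v1.30.3/kubelet")
	}
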
	I0722 00:37:11.307393   13232 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0722 00:37:11.326748   13232 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0722 00:37:11.359910   13232 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 00:37:11.390159   13232 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0722 00:37:11.448634   13232 ssh_runner.go:195] Run: grep 172.28.207.254	control-plane.minikube.internal$ /etc/hosts
	I0722 00:37:11.455556   13232 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
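
The bash pipeline above makes the /etc/hosts update idempotent: drop any existing line ending in the tab-separated control-plane.minikube.internal name, append the current VIP mapping, and copy the temp file back into place. A standalone sketch of the same filter-and-append logic (minikube actually runs the shell version over SSH; this local variant is illustrative only):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// updateHosts drops any stale line for `host` and appends the new mapping,
	// matching the grep -v / echo pipeline in the log above.
	func updateHosts(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := updateHosts("/etc/hosts", "172.28.207.254", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
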
	I0722 00:37:11.489914   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:37:11.696729   13232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:37:11.728415   13232 host.go:66] Checking if "ha-474700" exists ...
	I0722 00:37:11.729296   13232 start.go:317] joinCluster: &{Name:ha-474700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-474700 Namespace:default APIServerHAVIP:172.28.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.196.103 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.200.182 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.196.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 00:37:11.729296   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0722 00:37:11.729296   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-474700 ).state
	I0722 00:37:13.948953   13232 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 00:37:13.948953   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:37:13.948953   13232 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-474700 ).networkadapters[0]).ipaddresses[0]
	I0722 00:37:16.621499   13232 main.go:141] libmachine: [stdout =====>] : 172.28.196.103
	
	I0722 00:37:16.621499   13232 main.go:141] libmachine: [stderr =====>] : 
	I0722 00:37:16.622237   13232 sshutil.go:53] new ssh client: &{IP:172.28.196.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-474700\id_rsa Username:docker}
	I0722 00:37:16.846791   13232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1174336s)
	I0722 00:37:16.846861   13232 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.196.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:37:16.846861   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zb3440.bxdiafssaaw02ybu --discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-474700-m03 --control-plane --apiserver-advertise-address=172.28.196.120 --apiserver-bind-port=8443"
	I0722 00:38:02.410490   13232 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zb3440.bxdiafssaaw02ybu --discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-474700-m03 --control-plane --apiserver-advertise-address=172.28.196.120 --apiserver-bind-port=8443": (45.562521s)
	I0722 00:38:02.410561   13232 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0722 00:38:03.269624   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-474700-m03 minikube.k8s.io/updated_at=2024_07_22T00_38_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=ha-474700 minikube.k8s.io/primary=false
	I0722 00:38:03.487278   13232 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-474700-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0722 00:38:03.665811   13232 start.go:319] duration metric: took 51.9358938s to joinCluster
	I0722 00:38:03.665811   13232 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.28.196.120 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 00:38:03.666859   13232 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:38:03.668852   13232 out.go:177] * Verifying Kubernetes components...
	I0722 00:38:03.685256   13232 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 00:38:04.113288   13232 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 00:38:04.144694   13232 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:38:04.145335   13232 kapi.go:59] client config for ha-474700: &rest.Config{Host:"https://172.28.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-474700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2085e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0722 00:38:04.145335   13232 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.207.254:8443 with https://172.28.196.103:8443
	I0722 00:38:04.146904   13232 node_ready.go:35] waiting up to 6m0s for node "ha-474700-m03" to be "Ready" ...
	I0722 00:38:04.147085   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:04.147126   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:04.147126   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:04.147155   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:04.160448   13232 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0722 00:38:04.648809   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:04.648809   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:04.648809   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:04.648809   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:04.655027   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:05.152771   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:05.153032   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:05.153032   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:05.153032   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:05.157231   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:05.662243   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:05.662998   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:05.662998   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:05.662998   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:05.668909   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:06.152095   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:06.152366   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:06.152366   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:06.152366   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:06.172595   13232 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0722 00:38:06.174473   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:06.658843   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:06.658843   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:06.659092   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:06.659092   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:06.666843   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:38:07.148263   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:07.148263   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:07.148263   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:07.148263   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:07.152609   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:07.655165   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:07.655165   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:07.655165   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:07.655165   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:07.660852   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:08.158685   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:08.158685   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:08.159011   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:08.159011   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:08.163160   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:08.661908   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:08.662002   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:08.662002   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:08.662002   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:08.667587   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:08.668405   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:09.153011   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:09.153011   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:09.153011   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:09.153011   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:09.158456   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:09.657814   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:09.657814   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:09.657814   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:09.657814   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:09.662816   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:10.149246   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:10.149246   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:10.149246   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:10.149619   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:10.158961   13232 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 00:38:10.657915   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:10.657915   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:10.657915   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:10.657915   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:10.666161   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:38:11.147483   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:11.147483   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:11.147483   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:11.147483   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:11.262163   13232 round_trippers.go:574] Response Status: 200 OK in 114 milliseconds
	I0722 00:38:11.262740   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:11.650649   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:11.650649   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:11.650649   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:11.651008   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:11.655572   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:12.156098   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:12.156337   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:12.156337   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:12.156545   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:12.162223   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:12.657701   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:12.657701   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:12.657701   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:12.657701   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:12.662286   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:13.149241   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:13.149364   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:13.149364   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:13.149364   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:13.161562   13232 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0722 00:38:13.653853   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:13.653918   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:13.653918   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:13.653918   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:13.658666   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:13.659291   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:14.159724   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:14.159724   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:14.159724   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:14.159724   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:14.165102   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:14.648902   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:14.648902   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:14.648902   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:14.649030   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:14.653257   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:15.154609   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:15.154609   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:15.154609   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:15.154609   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:15.159229   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:15.654999   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:15.655133   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:15.655133   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:15.655133   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:15.660544   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:15.661307   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:16.154592   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:16.154592   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:16.154592   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:16.154708   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:16.159210   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:16.656779   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:16.656779   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:16.656779   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:16.656779   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:16.662583   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:17.159558   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:17.159558   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:17.159558   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:17.159558   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:17.166313   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:17.660634   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:17.660634   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:17.660634   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:17.660634   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:17.666208   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:17.667111   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:18.147893   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:18.147893   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:18.147893   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:18.147893   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:18.163443   13232 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0722 00:38:18.649657   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:18.649657   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:18.649657   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:18.649955   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:18.655607   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:19.148473   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:19.148473   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:19.148473   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:19.148473   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:19.153134   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:19.652721   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:19.652721   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:19.652721   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:19.652721   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:19.658383   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:20.155622   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:20.155884   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:20.155884   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:20.155884   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:20.161233   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:20.163077   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:20.657723   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:20.657723   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:20.657723   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:20.658075   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:20.662896   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:21.158846   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:21.158846   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:21.158846   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:21.159070   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:21.163846   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:21.649649   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:21.649725   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:21.649725   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:21.649725   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:21.655022   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:22.149107   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:22.149192   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:22.149192   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:22.149192   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:22.153671   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:22.652291   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:22.652291   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:22.652291   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:22.652291   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:22.656605   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:22.657884   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:23.152509   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:23.152509   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:23.152509   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:23.152509   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:23.158168   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:23.652818   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:23.652925   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:23.652925   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:23.652925   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:23.658592   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:24.153018   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:24.153018   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:24.153107   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:24.153107   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:24.157770   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:24.650942   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:24.651039   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:24.651039   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:24.651097   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:24.655573   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.151260   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:25.151260   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.151260   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.151260   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.156749   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:25.157911   13232 node_ready.go:53] node "ha-474700-m03" has status "Ready":"False"
	I0722 00:38:25.650272   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:25.650272   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.650466   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.650466   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.654928   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.655999   13232 node_ready.go:49] node "ha-474700-m03" has status "Ready":"True"
	I0722 00:38:25.655999   13232 node_ready.go:38] duration metric: took 21.5088052s for node "ha-474700-m03" to be "Ready" ...
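
The ~500 ms spacing of the GET timestamps above is a fixed-interval poll of /api/v1/nodes/ha-474700-m03 until the node's Ready condition flips to True (21.5s here). A minimal client-go sketch of an equivalent wait, assuming an already-configured clientset; the interval, timeout, and function names are illustrative, not minikube's:

	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the node object every 500ms (up to 6m, matching the
	// log) until its Ready condition reports True.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
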
	I0722 00:38:25.655999   13232 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 00:38:25.655999   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:38:25.655999   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.655999   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.655999   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.666013   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:38:25.677288   13232 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.677288   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fwrd4
	I0722 00:38:25.677288   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.677288   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.677288   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.682375   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.684475   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:25.684475   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.684537   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.684537   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.687968   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:38:25.689212   13232 pod_ready.go:92] pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:25.689212   13232 pod_ready.go:81] duration metric: took 11.9237ms for pod "coredns-7db6d8ff4d-fwrd4" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.689978   13232 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.689978   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ndgcf
	I0722 00:38:25.689978   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.689978   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.689978   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.696177   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:25.696837   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:25.696837   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.696837   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.696837   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.701254   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.701254   13232 pod_ready.go:92] pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:25.702039   13232 pod_ready.go:81] duration metric: took 12.0603ms for pod "coredns-7db6d8ff4d-ndgcf" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.702039   13232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.702039   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700
	I0722 00:38:25.702164   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.702216   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.702216   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.705478   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:38:25.706161   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:25.706161   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.706161   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.706161   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.710859   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.710859   13232 pod_ready.go:92] pod "etcd-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:25.710859   13232 pod_ready.go:81] duration metric: took 8.8205ms for pod "etcd-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.710859   13232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.710859   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700-m02
	I0722 00:38:25.710859   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.710859   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.710859   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.715711   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:25.716817   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:25.716884   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.716884   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.716884   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.733660   13232 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0722 00:38:25.734689   13232 pod_ready.go:92] pod "etcd-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:25.734689   13232 pod_ready.go:81] duration metric: took 23.8299ms for pod "etcd-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.734689   13232 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:25.855541   13232 request.go:629] Waited for 120.8499ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700-m03
	I0722 00:38:25.855541   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/etcd-ha-474700-m03
	I0722 00:38:25.855824   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:25.855824   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:25.855824   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:25.860262   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:26.059985   13232 request.go:629] Waited for 198.175ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:26.059985   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:26.059985   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:26.059985   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:26.059985   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:26.065503   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:26.066795   13232 pod_ready.go:92] pod "etcd-ha-474700-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:26.066873   13232 pod_ready.go:81] duration metric: took 332.1792ms for pod "etcd-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
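
The "Waited ... due to client-side throttling" lines come from client-go's default rate limiter: the rest.Config logged earlier carries QPS:0, Burst:0, which client-go interprets as the defaults of 5 requests/s with a burst of 10, so the back-to-back pod and node GETs queue briefly. A sketch of raising those limits on a loaded config (an option, not something this test does):

	package throttle

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// newFastClient bumps the zero-value limits (which client-go defaults to
	// 5 QPS / burst 10) so tight request loops avoid client-side throttling.
	func newFastClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}
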
	I0722 00:38:26.066949   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:26.263208   13232 request.go:629] Waited for 196.0499ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700
	I0722 00:38:26.263456   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700
	I0722 00:38:26.263456   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:26.263456   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:26.263456   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:26.271700   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:38:26.465216   13232 request.go:629] Waited for 192.388ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:26.465606   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:26.465606   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:26.465606   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:26.465606   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:26.472019   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:26.472582   13232 pod_ready.go:92] pod "kube-apiserver-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:26.472582   13232 pod_ready.go:81] duration metric: took 405.6286ms for pod "kube-apiserver-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:26.472644   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:26.654049   13232 request.go:629] Waited for 181.3433ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m02
	I0722 00:38:26.654049   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m02
	I0722 00:38:26.654049   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:26.654049   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:26.654049   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:26.660062   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:26.858098   13232 request.go:629] Waited for 196.2789ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:26.858272   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:26.858272   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:26.858272   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:26.858272   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:26.863587   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:26.865239   13232 pod_ready.go:92] pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:26.865298   13232 pod_ready.go:81] duration metric: took 392.6494ms for pod "kube-apiserver-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:26.865298   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:27.060426   13232 request.go:629] Waited for 194.8577ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m03
	I0722 00:38:27.060567   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-474700-m03
	I0722 00:38:27.060567   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:27.060567   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:27.060567   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:27.065028   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:27.263598   13232 request.go:629] Waited for 197.467ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:27.263710   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:27.263710   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:27.263806   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:27.263806   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:27.268161   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:27.268795   13232 pod_ready.go:92] pod "kube-apiserver-ha-474700-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:27.268795   13232 pod_ready.go:81] duration metric: took 403.4917ms for pod "kube-apiserver-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:27.268795   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:27.451204   13232 request.go:629] Waited for 182.4074ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700
	I0722 00:38:27.451204   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700
	I0722 00:38:27.451204   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:27.451204   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:27.451204   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:27.456969   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:27.656031   13232 request.go:629] Waited for 197.407ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:27.656252   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:27.656252   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:27.656364   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:27.656364   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:27.660710   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:27.662347   13232 pod_ready.go:92] pod "kube-controller-manager-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:27.662441   13232 pod_ready.go:81] duration metric: took 393.6413ms for pod "kube-controller-manager-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:27.662441   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:27.858197   13232 request.go:629] Waited for 195.5505ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m02
	I0722 00:38:27.858308   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m02
	I0722 00:38:27.858308   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:27.858308   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:27.858308   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:27.862849   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:28.061784   13232 request.go:629] Waited for 198.0686ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:28.061784   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:28.061784   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:28.061784   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:28.061784   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:28.071647   13232 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 00:38:28.072917   13232 pod_ready.go:92] pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:28.073004   13232 pod_ready.go:81] duration metric: took 410.5587ms for pod "kube-controller-manager-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:28.073004   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:28.252122   13232 request.go:629] Waited for 178.685ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m03
	I0722 00:38:28.252347   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-474700-m03
	I0722 00:38:28.252347   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:28.252347   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:28.252347   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:28.256968   13232 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 00:38:28.455908   13232 request.go:629] Waited for 196.1174ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:28.456015   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:28.456015   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:28.456015   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:28.456015   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:28.461828   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:28.463523   13232 pod_ready.go:92] pod "kube-controller-manager-ha-474700-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:28.463523   13232 pod_ready.go:81] duration metric: took 390.514ms for pod "kube-controller-manager-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:28.463577   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwkpc" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:28.658191   13232 request.go:629] Waited for 194.6112ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkpc
	I0722 00:38:28.658466   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwkpc
	I0722 00:38:28.658466   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:28.658466   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:28.658466   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:28.667133   13232 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 00:38:28.862682   13232 request.go:629] Waited for 193.8701ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:28.862927   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:28.863084   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:28.863084   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:28.863084   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:28.868753   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:28.869679   13232 pod_ready.go:92] pod "kube-proxy-fwkpc" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:28.869791   13232 pod_ready.go:81] duration metric: took 406.0971ms for pod "kube-proxy-fwkpc" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:28.869791   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kmnj9" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:29.064862   13232 request.go:629] Waited for 194.7807ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kmnj9
	I0722 00:38:29.064958   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kmnj9
	I0722 00:38:29.064958   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:29.064958   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:29.064958   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:29.070321   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:29.252245   13232 request.go:629] Waited for 180.7835ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:29.252434   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:29.252434   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:29.252565   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:29.252565   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:29.256385   13232 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 00:38:29.258077   13232 pod_ready.go:92] pod "kube-proxy-kmnj9" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:29.258149   13232 pod_ready.go:81] duration metric: took 388.3538ms for pod "kube-proxy-kmnj9" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:29.258149   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xzxkz" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:29.456485   13232 request.go:629] Waited for 198.2597ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzxkz
	I0722 00:38:29.456643   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzxkz
	I0722 00:38:29.456643   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:29.456643   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:29.456643   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:29.462353   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:29.662357   13232 request.go:629] Waited for 198.5163ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:29.662357   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:29.662357   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:29.662357   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:29.662357   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:29.667976   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:29.669124   13232 pod_ready.go:92] pod "kube-proxy-xzxkz" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:29.669192   13232 pod_ready.go:81] duration metric: took 411.0378ms for pod "kube-proxy-xzxkz" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:29.669192   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:29.864338   13232 request.go:629] Waited for 194.8009ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700
	I0722 00:38:29.864466   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700
	I0722 00:38:29.864466   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:29.864466   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:29.864466   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:29.870052   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:30.055490   13232 request.go:629] Waited for 184.0146ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:30.055703   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700
	I0722 00:38:30.055703   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.055760   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.055760   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.063124   13232 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 00:38:30.064011   13232 pod_ready.go:92] pod "kube-scheduler-ha-474700" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:30.064122   13232 pod_ready.go:81] duration metric: took 394.9251ms for pod "kube-scheduler-ha-474700" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:30.064122   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:30.262255   13232 request.go:629] Waited for 197.9449ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m02
	I0722 00:38:30.262367   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m02
	I0722 00:38:30.262487   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.262547   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.262547   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.267792   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:30.451747   13232 request.go:629] Waited for 182.0925ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:30.451845   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m02
	I0722 00:38:30.451978   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.451978   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.451978   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.457166   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:30.458502   13232 pod_ready.go:92] pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:30.458585   13232 pod_ready.go:81] duration metric: took 394.4583ms for pod "kube-scheduler-ha-474700-m02" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:30.458585   13232 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:30.654602   13232 request.go:629] Waited for 195.7114ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m03
	I0722 00:38:30.654860   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-474700-m03
	I0722 00:38:30.654860   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.655078   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.655078   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.660425   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:30.856691   13232 request.go:629] Waited for 195.1049ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:30.856822   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes/ha-474700-m03
	I0722 00:38:30.856822   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.856985   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.856985   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.862334   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:30.863509   13232 pod_ready.go:92] pod "kube-scheduler-ha-474700-m03" in "kube-system" namespace has status "Ready":"True"
	I0722 00:38:30.863634   13232 pod_ready.go:81] duration metric: took 404.9188ms for pod "kube-scheduler-ha-474700-m03" in "kube-system" namespace to be "Ready" ...
	I0722 00:38:30.863634   13232 pod_ready.go:38] duration metric: took 5.2075731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
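	(Aside: each pod_ready block above is one poll loop on the pod's Ready condition through the API server, capped at 6m0s per pod. A minimal client-go sketch of that pattern, assuming a configured kubernetes.Interface; this is illustrative only, not minikube's actual pod_ready.go:)

	// readiness_sketch.go -- illustrative sketch, not minikube's implementation.
	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the named pod reports the Ready condition,
	// bounded by the same 6m0s per-pod timeout seen in the log above.
	func waitPodReady(cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat lookup errors as transient and keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}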
	I0722 00:38:30.863634   13232 api_server.go:52] waiting for apiserver process to appear ...
	I0722 00:38:30.875327   13232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 00:38:30.901804   13232 api_server.go:72] duration metric: took 27.2353441s to wait for apiserver process to appear ...
	I0722 00:38:30.901804   13232 api_server.go:88] waiting for apiserver healthz status ...
	I0722 00:38:30.901944   13232 api_server.go:253] Checking apiserver healthz at https://172.28.196.103:8443/healthz ...
	I0722 00:38:30.909898   13232 api_server.go:279] https://172.28.196.103:8443/healthz returned 200:
	ok
	I0722 00:38:30.910064   13232 round_trippers.go:463] GET https://172.28.196.103:8443/version
	I0722 00:38:30.910155   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:30.910155   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:30.910155   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:30.910842   13232 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0722 00:38:30.911665   13232 api_server.go:141] control plane version: v1.30.3
	I0722 00:38:30.911779   13232 api_server.go:131] duration metric: took 9.9742ms to wait for apiserver health ...
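	(Aside: the healthz step above is a plain HTTPS GET that succeeds on a 200 "ok" body. A minimal sketch, assuming an *http.Client already configured with the cluster CA and client certificates; the function name is hypothetical:)

	// healthz_sketch.go -- sketch of the apiserver healthz probe logged above.
	package healthz

	import (
		"fmt"
		"io"
		"net/http"
	)

	// checkHealthz returns nil when GET <endpoint>/healthz answers 200,
	// as in the log line "https://172.28.196.103:8443/healthz returned 200: ok".
	func checkHealthz(client *http.Client, endpoint string) error {
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}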
	I0722 00:38:30.911779   13232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 00:38:31.060191   13232 request.go:629] Waited for 148.4108ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:38:31.060481   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:38:31.060573   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:31.060573   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:31.060725   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:31.071032   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:38:31.081495   13232 system_pods.go:59] 24 kube-system pods found
	I0722 00:38:31.081495   13232 system_pods.go:61] "coredns-7db6d8ff4d-fwrd4" [3d8cf645-4238-4079-a401-18ff3ffdbf66] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "coredns-7db6d8ff4d-ndgcf" [ce30ed50-b5a7-4742-9f83-c60ecd47dc31] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "etcd-ha-474700" [b1ca44b2-3832-4a56-8bd1-c233907d8de3] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "etcd-ha-474700-m02" [f05d667f-c484-47ec-9be9-d5fe65452238] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "etcd-ha-474700-m03" [55948e51-5624-4969-ad9a-d702816407a6] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kindnet-kldv9" [01a2e280-762e-40bc-b79a-66e935b52f26] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kindnet-mtsts" [099a5306-0035-412a-9219-316d036b0f9e] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kindnet-xmjbz" [c65e9a3b-0f40-4424-af70-b56d7c04018c] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-apiserver-ha-474700" [881080dc-0756-4d59-ae7f-9b1ed240dd5d] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-apiserver-ha-474700-m02" [5906cda9-2d5a-486d-acc3-babb58a51586] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-apiserver-ha-474700-m03" [d47c41fd-ba5e-4754-aa37-8a6f88d5b346] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-controller-manager-ha-474700" [9bbed77b-5977-48a3-9816-d3734482dd9c] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-controller-manager-ha-474700-m02" [2e24aaa1-d708-451f-bf42-9d3b887463ea] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-controller-manager-ha-474700-m03" [6c1370c7-fc72-43f8-af93-7dd0d04fed14] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-proxy-fwkpc" [896d5fb8-be02-42a8-8ddf-260154a34162] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-proxy-kmnj9" [6a6597e3-9ae2-43cb-8838-ce01b1e9476f] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-proxy-xzxkz" [a0af0ee7-b83e-436d-9b25-04642314576a] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-scheduler-ha-474700" [fc771043-36f2-49a1-9675-b647b88f692b] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-scheduler-ha-474700-m02" [dd7e08b2-b3bf-4e32-8159-73bfeb9e1c33] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-scheduler-ha-474700-m03" [221c1654-b31a-4a72-8e3e-8659b9dff52f] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-vip-ha-474700" [f6aaa6ef-c03c-4ff3-889e-dc765c688373] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-vip-ha-474700-m02" [6c94d6e9-f93f-4971-ab0d-6978c39375df] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "kube-vip-ha-474700-m03" [f9fac82c-283d-4b13-9a7e-7a20d90262fa] Running
	I0722 00:38:31.081495   13232 system_pods.go:61] "storage-provisioner" [f289ea73-0be9-4a29-92d2-2897ee8972a6] Running
	I0722 00:38:31.081495   13232 system_pods.go:74] duration metric: took 169.7145ms to wait for pod list to return data ...
	I0722 00:38:31.081495   13232 default_sa.go:34] waiting for default service account to be created ...
	I0722 00:38:31.262477   13232 request.go:629] Waited for 180.74ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/default/serviceaccounts
	I0722 00:38:31.262477   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/default/serviceaccounts
	I0722 00:38:31.262477   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:31.262477   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:31.262477   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:31.267621   13232 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 00:38:31.268770   13232 default_sa.go:45] found service account: "default"
	I0722 00:38:31.268770   13232 default_sa.go:55] duration metric: took 187.2724ms for default service account to be created ...
	I0722 00:38:31.268770   13232 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 00:38:31.452400   13232 request.go:629] Waited for 183.4589ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:38:31.452761   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/namespaces/kube-system/pods
	I0722 00:38:31.452908   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:31.452908   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:31.452908   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:31.463047   13232 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 00:38:31.474374   13232 system_pods.go:86] 24 kube-system pods found
	I0722 00:38:31.474374   13232 system_pods.go:89] "coredns-7db6d8ff4d-fwrd4" [3d8cf645-4238-4079-a401-18ff3ffdbf66] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "coredns-7db6d8ff4d-ndgcf" [ce30ed50-b5a7-4742-9f83-c60ecd47dc31] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "etcd-ha-474700" [b1ca44b2-3832-4a56-8bd1-c233907d8de3] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "etcd-ha-474700-m02" [f05d667f-c484-47ec-9be9-d5fe65452238] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "etcd-ha-474700-m03" [55948e51-5624-4969-ad9a-d702816407a6] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kindnet-kldv9" [01a2e280-762e-40bc-b79a-66e935b52f26] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kindnet-mtsts" [099a5306-0035-412a-9219-316d036b0f9e] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kindnet-xmjbz" [c65e9a3b-0f40-4424-af70-b56d7c04018c] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-apiserver-ha-474700" [881080dc-0756-4d59-ae7f-9b1ed240dd5d] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-apiserver-ha-474700-m02" [5906cda9-2d5a-486d-acc3-babb58a51586] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-apiserver-ha-474700-m03" [d47c41fd-ba5e-4754-aa37-8a6f88d5b346] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-controller-manager-ha-474700" [9bbed77b-5977-48a3-9816-d3734482dd9c] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-controller-manager-ha-474700-m02" [2e24aaa1-d708-451f-bf42-9d3b887463ea] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-controller-manager-ha-474700-m03" [6c1370c7-fc72-43f8-af93-7dd0d04fed14] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-proxy-fwkpc" [896d5fb8-be02-42a8-8ddf-260154a34162] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-proxy-kmnj9" [6a6597e3-9ae2-43cb-8838-ce01b1e9476f] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-proxy-xzxkz" [a0af0ee7-b83e-436d-9b25-04642314576a] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-scheduler-ha-474700" [fc771043-36f2-49a1-9675-b647b88f692b] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-scheduler-ha-474700-m02" [dd7e08b2-b3bf-4e32-8159-73bfeb9e1c33] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-scheduler-ha-474700-m03" [221c1654-b31a-4a72-8e3e-8659b9dff52f] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-vip-ha-474700" [f6aaa6ef-c03c-4ff3-889e-dc765c688373] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-vip-ha-474700-m02" [6c94d6e9-f93f-4971-ab0d-6978c39375df] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "kube-vip-ha-474700-m03" [f9fac82c-283d-4b13-9a7e-7a20d90262fa] Running
	I0722 00:38:31.474374   13232 system_pods.go:89] "storage-provisioner" [f289ea73-0be9-4a29-92d2-2897ee8972a6] Running
	I0722 00:38:31.474374   13232 system_pods.go:126] duration metric: took 205.6019ms to wait for k8s-apps to be running ...
	I0722 00:38:31.474374   13232 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 00:38:31.485374   13232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 00:38:31.510377   13232 system_svc.go:56] duration metric: took 36.0026ms WaitForService to wait for kubelet
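	(Aside: the kubelet check above shells out to systemctl; with --quiet the answer comes back purely in the exit status. A sketch run locally rather than over minikube's SSH runner; the helper name is hypothetical:)

	// kubelet_sketch.go -- sketch of the "systemctl is-active" check above.
	package svc

	import "os/exec"

	// kubeletActive reports whether the kubelet unit is active: exit code 0
	// from "systemctl is-active --quiet" means the unit is running.
	func kubeletActive() bool {
		return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}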
	I0722 00:38:31.510597   13232 kubeadm.go:582] duration metric: took 27.8441299s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 00:38:31.510719   13232 node_conditions.go:102] verifying NodePressure condition ...
	I0722 00:38:31.657078   13232 request.go:629] Waited for 146.2671ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.196.103:8443/api/v1/nodes
	I0722 00:38:31.657329   13232 round_trippers.go:463] GET https://172.28.196.103:8443/api/v1/nodes
	I0722 00:38:31.657329   13232 round_trippers.go:469] Request Headers:
	I0722 00:38:31.657329   13232 round_trippers.go:473]     Accept: application/json, */*
	I0722 00:38:31.657329   13232 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 00:38:31.663727   13232 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 00:38:31.665735   13232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:38:31.665807   13232 node_conditions.go:123] node cpu capacity is 2
	I0722 00:38:31.665807   13232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:38:31.665807   13232 node_conditions.go:123] node cpu capacity is 2
	I0722 00:38:31.665807   13232 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 00:38:31.665807   13232 node_conditions.go:123] node cpu capacity is 2
	I0722 00:38:31.665807   13232 node_conditions.go:105] duration metric: took 155.0858ms to run NodePressure ...
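	(Aside: the NodePressure pass above reads each node's allocatable resources from a single nodes list. A client-go sketch of that read, assuming a configured kubernetes.Interface; the function name is hypothetical:)

	// nodepressure_sketch.go -- sketch of the per-node capacity pass above.
	package nodes

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printAllocatable lists each node's allocatable CPU and ephemeral storage,
	// the two figures logged per node above (cpu=2, ephemeral-storage=17734596Ki).
	func printAllocatable(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Allocatable[corev1.ResourceCPU]
			eph := n.Status.Allocatable[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
		return nil
	}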
	I0722 00:38:31.665807   13232 start.go:241] waiting for startup goroutines ...
	I0722 00:38:31.665921   13232 start.go:255] writing updated cluster config ...
	I0722 00:38:31.678463   13232 ssh_runner.go:195] Run: rm -f paused
	I0722 00:38:31.826090   13232 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 00:38:31.830185   13232 out.go:177] * Done! kubectl is now configured to use "ha-474700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 22 00:30:29 ha-474700 cri-dockerd[1325]: time="2024-07-22T00:30:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b86a30eb1eabb843ab7b5b96b3ebcc7c996a734cec7dc1700c62159d7f231585/resolv.conf as [nameserver 172.28.192.1]"
	Jul 22 00:30:29 ha-474700 cri-dockerd[1325]: time="2024-07-22T00:30:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/84ea3ad80e043b4ca97319e460c8ea0e48342bb3572f4ed3e13443d422bfda00/resolv.conf as [nameserver 172.28.192.1]"
	Jul 22 00:30:29 ha-474700 cri-dockerd[1325]: time="2024-07-22T00:30:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7ef53d69b643b18ca967c22e3a84238afb9e399517b835c7fd62ca9d8875c26c/resolv.conf as [nameserver 172.28.192.1]"
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.379606934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.379928335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.379941535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.380081436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.435349551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.435569952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.435796453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.436932157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.584403433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.585152436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.585256236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:30:29 ha-474700 dockerd[1429]: time="2024-07-22T00:30:29.585833238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:39:11 ha-474700 dockerd[1429]: time="2024-07-22T00:39:11.731733898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 00:39:11 ha-474700 dockerd[1429]: time="2024-07-22T00:39:11.731850101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 00:39:11 ha-474700 dockerd[1429]: time="2024-07-22T00:39:11.731865501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:39:11 ha-474700 dockerd[1429]: time="2024-07-22T00:39:11.733063532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:39:11 ha-474700 cri-dockerd[1325]: time="2024-07-22T00:39:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/534ecae774bcffc01be9307d0d62b2037a07352cd25b841ecf7efc05df8cdefb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 22 00:39:13 ha-474700 cri-dockerd[1325]: time="2024-07-22T00:39:13Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 22 00:39:14 ha-474700 dockerd[1429]: time="2024-07-22T00:39:14.184328553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 00:39:14 ha-474700 dockerd[1429]: time="2024-07-22T00:39:14.184504556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 00:39:14 ha-474700 dockerd[1429]: time="2024-07-22T00:39:14.184539656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 00:39:14 ha-474700 dockerd[1429]: time="2024-07-22T00:39:14.185147065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6d688317ae329       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   534ecae774bcf       busybox-fc5497c4f-tdwp8
	0563e68a100e2       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   7ef53d69b643b       coredns-7db6d8ff4d-fwrd4
	a3f532f981c0c       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   b86a30eb1eabb       coredns-7db6d8ff4d-ndgcf
	685d0f839c603       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   84ea3ad80e043       storage-provisioner
	711176f77704c       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              26 minutes ago      Running             kindnet-cni               0                   f4275a0b2de2d       kindnet-kldv9
	a27150ded0e0a       55bb025d2cfa5                                                                                         26 minutes ago      Running             kube-proxy                0                   a68b96dd366f4       kube-proxy-fwkpc
	a044134e73300       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   246609b5f9dbc       kube-vip-ha-474700
	c45f67167207b       76932a3b37d7e                                                                                         27 minutes ago      Running             kube-controller-manager   0                   a2e406356cc8e       kube-controller-manager-ha-474700
	95bfa6ffee0da       1f6d574d502f3                                                                                         27 minutes ago      Running             kube-apiserver            0                   06585129fa08d       kube-apiserver-ha-474700
	e7c1294e244eb       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   308a38a9ec1f7       etcd-ha-474700
	2ada486ec6f81       3edc18e7b7672                                                                                         27 minutes ago      Running             kube-scheduler            0                   34fbd34d9f618       kube-scheduler-ha-474700
	
	
	==> coredns [0563e68a100e] <==
	[INFO] 10.244.1.2:56844 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217904s
	[INFO] 10.244.1.2:42720 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114602s
	[INFO] 10.244.1.2:48423 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127602s
	[INFO] 10.244.2.2:51009 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.122329269s
	[INFO] 10.244.2.2:50985 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063501s
	[INFO] 10.244.2.2:54792 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140302s
	[INFO] 10.244.0.4:48442 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161902s
	[INFO] 10.244.0.4:41654 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000149602s
	[INFO] 10.244.0.4:37935 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212103s
	[INFO] 10.244.0.4:57981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148702s
	[INFO] 10.244.0.4:36890 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218703s
	[INFO] 10.244.1.2:46368 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129201s
	[INFO] 10.244.1.2:52507 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166502s
	[INFO] 10.244.2.2:46303 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151502s
	[INFO] 10.244.2.2:39484 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119901s
	[INFO] 10.244.2.2:49091 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000333205s
	[INFO] 10.244.0.4:34379 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000255204s
	[INFO] 10.244.0.4:40009 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156502s
	[INFO] 10.244.0.4:43280 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084201s
	[INFO] 10.244.1.2:42604 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000418606s
	[INFO] 10.244.1.2:55399 - 5 "PTR IN 1.192.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107802s
	[INFO] 10.244.2.2:60910 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000259704s
	[INFO] 10.244.2.2:35394 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093001s
	[INFO] 10.244.0.4:53757 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000344805s
	[INFO] 10.244.0.4:39593 - 5 "PTR IN 1.192.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138302s
	
	
	==> coredns [a3f532f981c0] <==
	[INFO] 10.244.2.2:39288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248603s
	[INFO] 10.244.2.2:41768 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000093301s
	[INFO] 10.244.0.4:50057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110001s
	[INFO] 10.244.0.4:42132 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000172902s
	[INFO] 10.244.1.2:59888 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.069351904s
	[INFO] 10.244.1.2:58401 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01106196s
	[INFO] 10.244.1.2:58793 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133202s
	[INFO] 10.244.2.2:46512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113201s
	[INFO] 10.244.2.2:45345 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146702s
	[INFO] 10.244.2.2:34632 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000208403s
	[INFO] 10.244.2.2:60032 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125102s
	[INFO] 10.244.2.2:52448 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073901s
	[INFO] 10.244.0.4:59425 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000132702s
	[INFO] 10.244.0.4:43894 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000346705s
	[INFO] 10.244.0.4:53758 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192903s
	[INFO] 10.244.1.2:55849 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000199003s
	[INFO] 10.244.1.2:39483 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065101s
	[INFO] 10.244.2.2:46288 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000305505s
	[INFO] 10.244.0.4:52757 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154902s
	[INFO] 10.244.1.2:60576 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138802s
	[INFO] 10.244.1.2:59529 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126602s
	[INFO] 10.244.2.2:53578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111201s
	[INFO] 10.244.2.2:42314 - 5 "PTR IN 1.192.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100402s
	[INFO] 10.244.0.4:33210 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141502s
	[INFO] 10.244.0.4:54648 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000250803s
	
	
	==> describe nodes <==
	Name:               ha-474700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-474700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-474700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T00_29_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:29:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-474700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:56:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:54:39 +0000   Mon, 22 Jul 2024 00:29:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:54:39 +0000   Mon, 22 Jul 2024 00:29:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:54:39 +0000   Mon, 22 Jul 2024 00:29:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:54:39 +0000   Mon, 22 Jul 2024 00:30:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.196.103
	  Hostname:    ha-474700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d98a6c0c392f4a15a63a5b53be6383b5
	  System UUID:                2196853a-367c-da49-b3ac-104a8a9fbc62
	  Boot ID:                    563f6506-5094-4515-a320-c46c5ead8804
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tdwp8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-fwrd4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-ndgcf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-474700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-kldv9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-474700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-474700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-fwkpc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-474700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-474700                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-474700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-474700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-474700 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-474700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-474700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-474700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m                node-controller  Node ha-474700 event: Registered Node ha-474700 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-474700 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-474700 event: Registered Node ha-474700 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-474700 event: Registered Node ha-474700 in Controller
	
	
	Name:               ha-474700-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-474700-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-474700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T00_33_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:33:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-474700-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:56:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:54:44 +0000   Mon, 22 Jul 2024 00:33:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:54:44 +0000   Mon, 22 Jul 2024 00:33:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:54:44 +0000   Mon, 22 Jul 2024 00:33:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:54:44 +0000   Mon, 22 Jul 2024 00:34:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.200.182
	  Hostname:    ha-474700-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 daaef73654c84245b42fb87bf31f1432
	  System UUID:                a1ae7714-0c6b-5449-ade5-9be8a5aaaf08
	  Boot ID:                    297ec4e9-29de-4325-9090-d4818bd0aa55
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7fbtz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-474700-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-xmjbz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-474700-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-474700-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-kmnj9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-474700-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-474700-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-474700-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-474700-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-474700-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-474700-m02 event: Registered Node ha-474700-m02 in Controller
	  Normal  RegisteredNode           22m                node-controller  Node ha-474700-m02 event: Registered Node ha-474700-m02 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-474700-m02 event: Registered Node ha-474700-m02 in Controller
	
	
	Name:               ha-474700-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-474700-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-474700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T00_38_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:37:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-474700-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:56:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:54:47 +0000   Mon, 22 Jul 2024 00:37:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:54:47 +0000   Mon, 22 Jul 2024 00:37:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:54:47 +0000   Mon, 22 Jul 2024 00:37:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:54:47 +0000   Mon, 22 Jul 2024 00:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.196.120
	  Hostname:    ha-474700-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eab7c03af4a4829bc8269f525cc3f3b
	  System UUID:                6ff0cb45-3d8f-714a-9d6b-b1501828e840
	  Boot ID:                    9782621a-c0ae-42bf-a72e-cf6b6ea91f67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sv6jt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-474700-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-mtsts                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-474700-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-474700-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-xzxkz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-474700-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-474700-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-474700-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-474700-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-474700-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-474700-m03 event: Registered Node ha-474700-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-474700-m03 event: Registered Node ha-474700-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-474700-m03 event: Registered Node ha-474700-m03 in Controller
	
	
	Name:               ha-474700-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-474700-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=ha-474700
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T00_43_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 00:43:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-474700-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 00:56:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 00:54:21 +0000   Mon, 22 Jul 2024 00:43:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 00:54:21 +0000   Mon, 22 Jul 2024 00:43:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 00:54:21 +0000   Mon, 22 Jul 2024 00:43:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 00:54:21 +0000   Mon, 22 Jul 2024 00:44:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.192.142
	  Hostname:    ha-474700-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2dcd89cd0e31463eb3fdde4823bb9d37
	  System UUID:                3f075717-b48d-d441-ac77-f6ce871cdfe3
	  Boot ID:                    bea7a4a2-6ee4-497b-bbc8-c17b670c4a80
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2lchd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-jtltm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  RegisteredNode           13m                node-controller  Node ha-474700-m04 event: Registered Node ha-474700-m04 in Controller
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-474700-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-474700-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-474700-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-474700-m04 event: Registered Node ha-474700-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-474700-m04 event: Registered Node ha-474700-m04 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-474700-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.711796] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul22 00:28] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.172384] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[Jul22 00:29] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.104317] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.589506] systemd-fstab-generator[1039]: Ignoring "noauto" option for root device
	[  +0.201888] systemd-fstab-generator[1051]: Ignoring "noauto" option for root device
	[  +0.264501] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +2.962303] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.221043] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.212375] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.288709] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[ +11.462353] systemd-fstab-generator[1414]: Ignoring "noauto" option for root device
	[  +0.111118] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.935810] systemd-fstab-generator[1665]: Ignoring "noauto" option for root device
	[  +5.612328] systemd-fstab-generator[1859]: Ignoring "noauto" option for root device
	[  +0.108389] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.498692] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.725616] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[Jul22 00:30] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.498783] kauditd_printk_skb: 29 callbacks suppressed
	[Jul22 00:33] kauditd_printk_skb: 26 callbacks suppressed
	[Jul22 00:43] hrtimer: interrupt took 3461935 ns
	
	
	==> etcd [e7c1294e244e] <==
	{"level":"info","ts":"2024-07-22T00:43:42.424792Z","caller":"traceutil/trace.go:171","msg":"trace[2141836752] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2655; }","duration":"135.823974ms","start":"2024-07-22T00:43:42.288953Z","end":"2024-07-22T00:43:42.424777Z","steps":["trace[2141836752] 'agreement among raft nodes before linearized reading'  (duration: 88.9929ms)","trace[2141836752] 'range keys from in-memory index tree'  (duration: 46.651172ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T00:43:48.184761Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"2385b76ee203ad8d","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"10.891519ms"}
	{"level":"warn","ts":"2024-07-22T00:43:48.184924Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"1479c42f94363f26","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"11.06202ms"}
	{"level":"warn","ts":"2024-07-22T00:43:48.18536Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T00:43:47.876615Z","time spent":"308.741102ms","remote":"127.0.0.1:58728","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-07-22T00:43:48.202232Z","caller":"traceutil/trace.go:171","msg":"trace[987641079] transaction","detail":"{read_only:false; response_revision:2674; number_of_response:1; }","duration":"202.15533ms","start":"2024-07-22T00:43:48.000062Z","end":"2024-07-22T00:43:48.202217Z","steps":["trace[987641079] 'process raft request'  (duration: 202.059529ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:43:48.20306Z","caller":"traceutil/trace.go:171","msg":"trace[1030328423] linearizableReadLoop","detail":"{readStateIndex:3170; appliedIndex:3172; }","duration":"241.033621ms","start":"2024-07-22T00:43:47.962016Z","end":"2024-07-22T00:43:48.203049Z","steps":["trace[1030328423] 'read index received'  (duration: 241.029421ms)","trace[1030328423] 'applied index is now lower than readState.Index'  (duration: 3.4µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T00:43:48.203235Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.271324ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-474700-m04\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-07-22T00:43:48.203263Z","caller":"traceutil/trace.go:171","msg":"trace[956684648] range","detail":"{range_begin:/registry/minions/ha-474700-m04; range_end:; response_count:1; response_revision:2674; }","duration":"241.370624ms","start":"2024-07-22T00:43:47.961885Z","end":"2024-07-22T00:43:48.203255Z","steps":["trace[956684648] 'agreement among raft nodes before linearized reading'  (duration: 241.241423ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:43:48.224384Z","caller":"traceutil/trace.go:171","msg":"trace[186627974] transaction","detail":"{read_only:false; response_revision:2675; number_of_response:1; }","duration":"112.857333ms","start":"2024-07-22T00:43:48.111512Z","end":"2024-07-22T00:43:48.224369Z","steps":["trace[186627974] 'process raft request'  (duration: 112.771933ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:43:48.23068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.530035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-22T00:43:48.230875Z","caller":"traceutil/trace.go:171","msg":"trace[1521022668] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:2675; }","duration":"212.738837ms","start":"2024-07-22T00:43:48.018124Z","end":"2024-07-22T00:43:48.230863Z","steps":["trace[1521022668] 'agreement among raft nodes before linearized reading'  (duration: 212.507334ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:43:48.755125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.705365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:436"}
	{"level":"info","ts":"2024-07-22T00:43:48.75557Z","caller":"traceutil/trace.go:171","msg":"trace[518520705] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2677; }","duration":"176.21207ms","start":"2024-07-22T00:43:48.579343Z","end":"2024-07-22T00:43:48.755555Z","steps":["trace[518520705] 'range keys from in-memory index tree'  (duration: 174.146348ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:43:53.680302Z","caller":"traceutil/trace.go:171","msg":"trace[1489626510] transaction","detail":"{read_only:false; response_revision:2694; number_of_response:1; }","duration":"106.650265ms","start":"2024-07-22T00:43:53.573626Z","end":"2024-07-22T00:43:53.680277Z","steps":["trace[1489626510] 'process raft request'  (duration: 106.263561ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T00:43:54.668291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.63966ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-474700-m04\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-07-22T00:43:54.668349Z","caller":"traceutil/trace.go:171","msg":"trace[8484595] range","detail":"{range_begin:/registry/minions/ha-474700-m04; range_end:; response_count:1; response_revision:2697; }","duration":"216.798361ms","start":"2024-07-22T00:43:54.451537Z","end":"2024-07-22T00:43:54.668335Z","steps":["trace[8484595] 'range keys from in-memory index tree'  (duration: 214.976342ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T00:44:43.451851Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1999}
	{"level":"info","ts":"2024-07-22T00:44:43.510208Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1999,"took":"56.511031ms","hash":2736533109,"current-db-size-bytes":3698688,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2412544,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-22T00:44:43.510414Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2736533109,"revision":1999,"compact-revision":1088}
	{"level":"info","ts":"2024-07-22T00:49:43.492592Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2831}
	{"level":"info","ts":"2024-07-22T00:49:43.549068Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2831,"took":"55.570479ms","hash":3920402905,"current-db-size-bytes":3698688,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2277376,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-22T00:49:43.549408Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3920402905,"revision":2831,"compact-revision":1999}
	{"level":"info","ts":"2024-07-22T00:54:43.524495Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3574}
	{"level":"info","ts":"2024-07-22T00:54:43.577115Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":3574,"took":"52.016281ms","hash":4230774322,"current-db-size-bytes":3698688,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1953792,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-07-22T00:54:43.577234Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4230774322,"revision":3574,"compact-revision":2831}
	
	
	==> kernel <==
	 00:57:00 up 29 min,  0 users,  load average: 0.55, 0.56, 0.54
	Linux ha-474700 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [711176f77704] <==
	I0722 00:56:23.352800       1 main.go:322] Node ha-474700-m03 has CIDR [10.244.2.0/24] 
	I0722 00:56:33.351584       1 main.go:295] Handling node with IPs: map[172.28.196.103:{}]
	I0722 00:56:33.351769       1 main.go:299] handling current node
	I0722 00:56:33.351878       1 main.go:295] Handling node with IPs: map[172.28.200.182:{}]
	I0722 00:56:33.352021       1 main.go:322] Node ha-474700-m02 has CIDR [10.244.1.0/24] 
	I0722 00:56:33.352424       1 main.go:295] Handling node with IPs: map[172.28.196.120:{}]
	I0722 00:56:33.352769       1 main.go:322] Node ha-474700-m03 has CIDR [10.244.2.0/24] 
	I0722 00:56:33.353165       1 main.go:295] Handling node with IPs: map[172.28.192.142:{}]
	I0722 00:56:33.353269       1 main.go:322] Node ha-474700-m04 has CIDR [10.244.3.0/24] 
	I0722 00:56:43.343877       1 main.go:295] Handling node with IPs: map[172.28.196.103:{}]
	I0722 00:56:43.344155       1 main.go:299] handling current node
	I0722 00:56:43.344253       1 main.go:295] Handling node with IPs: map[172.28.200.182:{}]
	I0722 00:56:43.344361       1 main.go:322] Node ha-474700-m02 has CIDR [10.244.1.0/24] 
	I0722 00:56:43.344764       1 main.go:295] Handling node with IPs: map[172.28.196.120:{}]
	I0722 00:56:43.344852       1 main.go:322] Node ha-474700-m03 has CIDR [10.244.2.0/24] 
	I0722 00:56:43.344931       1 main.go:295] Handling node with IPs: map[172.28.192.142:{}]
	I0722 00:56:43.344944       1 main.go:322] Node ha-474700-m04 has CIDR [10.244.3.0/24] 
	I0722 00:56:53.349233       1 main.go:295] Handling node with IPs: map[172.28.200.182:{}]
	I0722 00:56:53.349352       1 main.go:322] Node ha-474700-m02 has CIDR [10.244.1.0/24] 
	I0722 00:56:53.349816       1 main.go:295] Handling node with IPs: map[172.28.196.120:{}]
	I0722 00:56:53.349903       1 main.go:322] Node ha-474700-m03 has CIDR [10.244.2.0/24] 
	I0722 00:56:53.350297       1 main.go:295] Handling node with IPs: map[172.28.192.142:{}]
	I0722 00:56:53.350333       1 main.go:322] Node ha-474700-m04 has CIDR [10.244.3.0/24] 
	I0722 00:56:53.350493       1 main.go:295] Handling node with IPs: map[172.28.196.103:{}]
	I0722 00:56:53.350754       1 main.go:299] handling current node
	
	
	==> kube-apiserver [95bfa6ffee0d] <==
	I0722 00:29:49.405417       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 00:29:49.429211       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0722 00:29:49.447744       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 00:30:02.464325       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0722 00:30:02.573467       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0722 00:37:57.290284       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0722 00:37:57.290701       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0722 00:37:57.290573       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 54µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0722 00:37:57.293743       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0722 00:37:57.294357       1 timeout.go:142] post-timeout activity - time-elapsed: 3.992103ms, PATCH "/api/v1/namespaces/default/events/ha-474700-m03.17e461f83c298cd4" result: <nil>
	E0722 00:39:18.085777       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52856: use of closed network connection
	E0722 00:39:18.809688       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52859: use of closed network connection
	E0722 00:39:19.383212       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52861: use of closed network connection
	E0722 00:39:19.995786       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52863: use of closed network connection
	E0722 00:39:20.683121       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52866: use of closed network connection
	E0722 00:39:21.240715       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52868: use of closed network connection
	E0722 00:39:21.767076       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52870: use of closed network connection
	E0722 00:39:22.321549       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52872: use of closed network connection
	E0722 00:39:22.843289       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52874: use of closed network connection
	E0722 00:39:23.804296       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52877: use of closed network connection
	E0722 00:39:34.332152       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52879: use of closed network connection
	E0722 00:39:34.836855       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52881: use of closed network connection
	E0722 00:39:45.395271       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52884: use of closed network connection
	E0722 00:39:45.914483       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52886: use of closed network connection
	E0722 00:39:56.442349       1 conn.go:339] Error on socket receive: read tcp 172.28.207.254:8443->172.28.192.1:52888: use of closed network connection
	
	
	==> kube-controller-manager [c45f67167207] <==
	I0722 00:37:57.502182       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-474700-m03"
	I0722 00:39:10.717175       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.301517ms"
	I0722 00:39:10.769463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.628245ms"
	I0722 00:39:10.771600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.603µs"
	I0722 00:39:10.791296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.701µs"
	I0722 00:39:10.815403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.302µs"
	I0722 00:39:10.817576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.301µs"
	I0722 00:39:11.053359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="228.639652ms"
	I0722 00:39:11.360312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="306.905169ms"
	E0722 00:39:11.360831       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0722 00:39:11.361185       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="240.806µs"
	I0722 00:39:11.367103       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="296.108µs"
	I0722 00:39:11.550191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.450792ms"
	I0722 00:39:11.550883       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.603µs"
	I0722 00:39:14.171082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.015963ms"
	I0722 00:39:14.171298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.7µs"
	I0722 00:39:15.228108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.038065ms"
	I0722 00:39:15.228210       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.401µs"
	I0722 00:39:15.425715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.929941ms"
	I0722 00:39:15.426308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="139.903µs"
	E0722 00:43:37.614926       1 certificate_controller.go:146] Sync csr-2vthx failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2vthx": the object has been modified; please apply your changes to the latest version and try again
	I0722 00:43:37.700300       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-474700-m04\" does not exist"
	I0722 00:43:37.757943       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-474700-m04" podCIDRs=["10.244.3.0/24"]
	I0722 00:43:42.649215       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-474700-m04"
	I0722 00:44:11.013330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-474700-m04"
	
	
	==> kube-proxy [a27150ded0e0] <==
	I0722 00:30:04.493455       1 server_linux.go:69] "Using iptables proxy"
	I0722 00:30:04.508739       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.196.103"]
	I0722 00:30:04.570836       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 00:30:04.571071       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 00:30:04.571096       1 server_linux.go:165] "Using iptables Proxier"
	I0722 00:30:04.575694       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 00:30:04.576111       1 server.go:872] "Version info" version="v1.30.3"
	I0722 00:30:04.576148       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 00:30:04.577611       1 config.go:192] "Starting service config controller"
	I0722 00:30:04.577654       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 00:30:04.577681       1 config.go:101] "Starting endpoint slice config controller"
	I0722 00:30:04.577686       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 00:30:04.580788       1 config.go:319] "Starting node config controller"
	I0722 00:30:04.580875       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 00:30:04.678860       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 00:30:04.679163       1 shared_informer.go:320] Caches are synced for service config
	I0722 00:30:04.681671       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2ada486ec6f8] <==
	W0722 00:29:46.604566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 00:29:46.605518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 00:29:46.777890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 00:29:46.778190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 00:29:46.854101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 00:29:46.854389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0722 00:29:46.928343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 00:29:46.928446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 00:29:46.973290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 00:29:46.973499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 00:29:46.993061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 00:29:46.993162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 00:29:47.037323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 00:29:47.037432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0722 00:29:47.077030       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 00:29:47.077126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0722 00:29:48.387482       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 00:43:37.878774       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2lchd\": pod kindnet-2lchd is already assigned to node \"ha-474700-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2lchd" node="ha-474700-m04"
	E0722 00:43:37.879307       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c4930b8f-7d30-48b4-9fa5-85fbdbc1e207(kube-system/kindnet-2lchd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2lchd"
	E0722 00:43:37.879444       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2lchd\": pod kindnet-2lchd is already assigned to node \"ha-474700-m04\"" pod="kube-system/kindnet-2lchd"
	I0722 00:43:37.880102       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2lchd" node="ha-474700-m04"
	E0722 00:43:38.227321       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-m5ncg\": pod kindnet-m5ncg is already assigned to node \"ha-474700-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-m5ncg" node="ha-474700-m04"
	E0722 00:43:38.227435       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d42bda40-7d4a-42e3-b9af-f97533ae5fbe(kube-system/kindnet-m5ncg) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-m5ncg"
	E0722 00:43:38.227527       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-m5ncg\": pod kindnet-m5ncg is already assigned to node \"ha-474700-m04\"" pod="kube-system/kindnet-m5ncg"
	I0722 00:43:38.228170       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-m5ncg" node="ha-474700-m04"
	
	
	==> kubelet <==
	Jul 22 00:52:49 ha-474700 kubelet[2361]: E0722 00:52:49.593075    2361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:52:49 ha-474700 kubelet[2361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:52:49 ha-474700 kubelet[2361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:52:49 ha-474700 kubelet[2361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:52:49 ha-474700 kubelet[2361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 00:53:49 ha-474700 kubelet[2361]: E0722 00:53:49.593644    2361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:53:49 ha-474700 kubelet[2361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:53:49 ha-474700 kubelet[2361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:53:49 ha-474700 kubelet[2361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:53:49 ha-474700 kubelet[2361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 00:54:49 ha-474700 kubelet[2361]: E0722 00:54:49.596443    2361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:54:49 ha-474700 kubelet[2361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:54:49 ha-474700 kubelet[2361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:54:49 ha-474700 kubelet[2361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:54:49 ha-474700 kubelet[2361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 00:55:49 ha-474700 kubelet[2361]: E0722 00:55:49.595029    2361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:55:49 ha-474700 kubelet[2361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:55:49 ha-474700 kubelet[2361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:55:49 ha-474700 kubelet[2361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:55:49 ha-474700 kubelet[2361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 00:56:49 ha-474700 kubelet[2361]: E0722 00:56:49.592329    2361 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 00:56:49 ha-474700 kubelet[2361]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 00:56:49 ha-474700 kubelet[2361]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 00:56:49 ha-474700 kubelet[2361]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 00:56:49 ha-474700 kubelet[2361]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 00:56:52.196275    4076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-474700 -n ha-474700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-474700 -n ha-474700: (12.7462892s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-474700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (39.58s)
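For reference, the post-mortem collection above can be replayed by hand against the same profile; a minimal sketch using only the commands already recorded in this block (profile and context names taken from the log):

	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-474700 -n ha-474700
	kubectl --context ha-474700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running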

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (59.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-5bv2m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-5bv2m -- sh -c "ping -c 1 172.28.192.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-5bv2m -- sh -c "ping -c 1 172.28.192.1": exit status 1 (10.5262136s)

                                                
                                                
-- stdout --
	PING 172.28.192.1 (172.28.192.1): 56 data bytes
	
	--- 172.28.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 01:35:13.560361   12648 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.28.192.1) from pod (busybox-fc5497c4f-5bv2m): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-tzrg5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-tzrg5 -- sh -c "ping -c 1 172.28.192.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-tzrg5 -- sh -c "ping -c 1 172.28.192.1": exit status 1 (10.5288985s)

                                                
                                                
-- stdout --
	PING 172.28.192.1 (172.28.192.1): 56 data bytes
	
	--- 172.28.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 01:35:24.611160    7776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.28.192.1) from pod (busybox-fc5497c4f-tzrg5): exit status 1
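Both pods completed the nslookup exec steps above but lost 100% of ICMP packets to 172.28.192.1, so the break is between the pod network and the Windows host rather than in-cluster DNS. A common culprit on Hyper-V runs is the host firewall dropping inbound ICMPv4 from the minikube switch; that is an assumption here, not something this log proves. A hedged PowerShell sketch of the usual check/workaround (the rule name is illustrative):

	# Assumption: Windows Defender Firewall is dropping the echo requests.
	# Hypothetical rule name; allows inbound ICMPv4 so pods can ping the host.
	New-NetFirewallRule -DisplayName "minikube-inbound-icmpv4" -Direction Inbound -Protocol ICMPv4 -Action Allow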
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-227000 -n multinode-227000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-227000 -n multinode-227000: (12.9640933s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 logs -n 25
E0722 01:35:55.633967    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 logs -n 25: (9.108864s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-856900 ssh -- ls                    | mount-start-2-856900 | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:23 UTC | 22 Jul 24 01:23 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-856900                           | mount-start-1-856900 | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:23 UTC | 22 Jul 24 01:24 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-856900 ssh -- ls                    | mount-start-2-856900 | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:24 UTC | 22 Jul 24 01:24 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-856900                           | mount-start-2-856900 | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:24 UTC | 22 Jul 24 01:24 UTC |
	| start   | -p mount-start-2-856900                           | mount-start-2-856900 | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:24 UTC | 22 Jul 24 01:26 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-856900 | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:26 UTC |                     |
	|         | --profile mount-start-2-856900 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-856900 ssh -- ls                    | mount-start-2-856900 | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:26 UTC | 22 Jul 24 01:26 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-856900                           | mount-start-2-856900 | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:26 UTC | 22 Jul 24 01:27 UTC |
	| delete  | -p mount-start-1-856900                           | mount-start-1-856900 | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:27 UTC | 22 Jul 24 01:27 UTC |
	| start   | -p multinode-227000                               | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:27 UTC | 22 Jul 24 01:34 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- apply -f                   | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- rollout                    | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- get pods -o                | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- get pods -o                | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- exec                       | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | busybox-fc5497c4f-5bv2m --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- exec                       | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | busybox-fc5497c4f-tzrg5 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- exec                       | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | busybox-fc5497c4f-5bv2m --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- exec                       | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | busybox-fc5497c4f-tzrg5 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- exec                       | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | busybox-fc5497c4f-5bv2m -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- exec                       | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | busybox-fc5497c4f-tzrg5 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- get pods -o                | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- exec                       | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | busybox-fc5497c4f-5bv2m                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- exec                       | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC |                     |
	|         | busybox-fc5497c4f-5bv2m -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.192.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- exec                       | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC | 22 Jul 24 01:35 UTC |
	|         | busybox-fc5497c4f-tzrg5                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-227000 -- exec                       | multinode-227000     | minikube6\jenkins | v1.33.1 | 22 Jul 24 01:35 UTC |                     |
	|         | busybox-fc5497c4f-tzrg5 -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.192.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 01:27:27
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 01:27:26.999560    6300 out.go:291] Setting OutFile to fd 668 ...
	I0722 01:27:27.000882    6300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:27:27.000882    6300 out.go:304] Setting ErrFile to fd 936...
	I0722 01:27:27.000882    6300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:27:27.025422    6300 out.go:298] Setting JSON to false
	I0722 01:27:27.028488    6300 start.go:129] hostinfo: {"hostname":"minikube6","uptime":126854,"bootTime":1721484792,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0722 01:27:27.029066    6300 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 01:27:27.033210    6300 out.go:177] * [multinode-227000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0722 01:27:27.038029    6300 notify.go:220] Checking for updates...
	I0722 01:27:27.040756    6300 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 01:27:27.043231    6300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 01:27:27.043936    6300 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0722 01:27:27.047429    6300 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 01:27:27.050104    6300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 01:27:27.056050    6300 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:27:27.056798    6300 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 01:27:32.388276    6300 out.go:177] * Using the hyperv driver based on user configuration
	I0722 01:27:32.392219    6300 start.go:297] selected driver: hyperv
	I0722 01:27:32.392219    6300 start.go:901] validating driver "hyperv" against <nil>
	I0722 01:27:32.392219    6300 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 01:27:32.439535    6300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 01:27:32.440361    6300 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 01:27:32.440361    6300 cni.go:84] Creating CNI manager for ""
	I0722 01:27:32.440361    6300 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0722 01:27:32.440361    6300 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 01:27:32.440920    6300 start.go:340] cluster config:
	{Name:multinode-227000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 01:27:32.441126    6300 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 01:27:32.446201    6300 out.go:177] * Starting "multinode-227000" primary control-plane node in "multinode-227000" cluster
	I0722 01:27:32.449085    6300 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 01:27:32.449085    6300 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0722 01:27:32.449085    6300 cache.go:56] Caching tarball of preloaded images
	I0722 01:27:32.449085    6300 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 01:27:32.449787    6300 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 01:27:32.450170    6300 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\config.json ...
	I0722 01:27:32.450170    6300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\config.json: {Name:mk6fe13834792db465dc431b0b7dabfe791ceecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 01:27:32.451590    6300 start.go:360] acquireMachinesLock for multinode-227000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 01:27:32.451590    6300 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-227000"
	I0722 01:27:32.451590    6300 start.go:93] Provisioning new machine with config: &{Name:multinode-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.3 ClusterName:multinode-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 01:27:32.452145    6300 start.go:125] createHost starting for "" (driver="hyperv")
	I0722 01:27:32.455259    6300 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 01:27:32.456116    6300 start.go:159] libmachine.API.Create for "multinode-227000" (driver="hyperv")
	I0722 01:27:32.456116    6300 client.go:168] LocalClient.Create starting
	I0722 01:27:32.456735    6300 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0722 01:27:32.457024    6300 main.go:141] libmachine: Decoding PEM data...
	I0722 01:27:32.457084    6300 main.go:141] libmachine: Parsing certificate...
	I0722 01:27:32.457316    6300 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0722 01:27:32.457513    6300 main.go:141] libmachine: Decoding PEM data...
	I0722 01:27:32.457513    6300 main.go:141] libmachine: Parsing certificate...
	I0722 01:27:32.457513    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0722 01:27:34.485309    6300 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0722 01:27:34.485539    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:27:34.485699    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0722 01:27:36.161248    6300 main.go:141] libmachine: [stdout =====>] : False
	
	I0722 01:27:36.161248    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:27:36.161248    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 01:27:37.616348    6300 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 01:27:37.616399    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:27:37.616399    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 01:27:41.120378    6300 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 01:27:41.120378    6300 main.go:141] libmachine: [stderr =====>] : 
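The switch-discovery step above is a PowerShell one-liner whose JSON output is parsed back in Go: only External switches or the well-known "Default Switch" GUID qualify, and the single Internal match (SwitchType 1) is what gets used here. A self-contained sketch of that round trip (an illustration, not minikube's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // vmSwitch mirrors the fields selected by the PowerShell query in the log.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // 0=Private, 1=Internal, 2=External
    }

    func listCandidateSwitches() ([]vmSwitch, error) {
        // Same query as the log: external switches, or the well-known
        // "Default Switch" GUID; @() forces a JSON array even for one hit.
        ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
            `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType | ` +
            `Where-Object {($_.SwitchType -eq 'External') -or ` +
            `($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')} | ` +
            `Sort-Object -Property SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
        if err != nil {
            return nil, err
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            return nil, err
        }
        return switches, nil
    }

    func main() {
        switches, err := listCandidateSwitches()
        if err != nil {
            fmt.Println("query failed:", err)
            return
        }
        for _, s := range switches {
            fmt.Printf("%s (%s, type %d)\n", s.Name, s.Id, s.SwitchType)
        }
    }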
	I0722 01:27:41.134337    6300 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 01:27:41.588747    6300 main.go:141] libmachine: Creating SSH key...
	I0722 01:27:41.806989    6300 main.go:141] libmachine: Creating VM...
	I0722 01:27:41.806989    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 01:27:44.649786    6300 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 01:27:44.649786    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:27:44.650020    6300 main.go:141] libmachine: Using switch "Default Switch"
	I0722 01:27:44.650213    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 01:27:46.361642    6300 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 01:27:46.361642    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:27:46.361642    6300 main.go:141] libmachine: Creating VHD
	I0722 01:27:46.361642    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0722 01:27:50.086880    6300 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 70D3A9F7-C48C-49F6-AE4D-341C94CB0AF4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0722 01:27:50.099564    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:27:50.099564    6300 main.go:141] libmachine: Writing magic tar header
	I0722 01:27:50.099642    6300 main.go:141] libmachine: Writing SSH key tar header
	I0722 01:27:50.112806    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0722 01:27:53.278839    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:27:53.278839    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:27:53.278839    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\disk.vhd' -SizeBytes 20000MB
	I0722 01:27:55.775840    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:27:55.787026    6300 main.go:141] libmachine: [stderr =====>] : 
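The disk bootstrap above is worth unpacking: a 10MB fixed VHD is created, the SSH key is written into its raw data region as a tar stream ("Writing magic tar header" / "Writing SSH key tar header"), the image is converted to a dynamic VHD, and only then resized to 20000MB; the guest extracts the key on first boot. A simplified sketch of the tar-seeding step (hypothetical paths, and it omits the magic header and VHD footer handling the real driver performs):

    package main

    import (
        "archive/tar"
        "log"
        "os"
    )

    // seedDisk writes a tar stream containing the public key at the start of
    // the raw disk image, so the guest's first-boot script can extract it.
    func seedDisk(diskPath, pubKeyPath string) error {
        key, err := os.ReadFile(pubKeyPath)
        if err != nil {
            return err
        }
        f, err := os.OpenFile(diskPath, os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f) // writes at offset 0 of the raw image
        defer tw.Close()
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(key))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        _, err = tw.Write(key)
        return err
    }

    func main() {
        // Hypothetical paths for illustration only.
        if err := seedDisk(`machines\multinode-227000\fixed.vhd`,
            `machines\multinode-227000\id_rsa.pub`); err != nil {
            log.Fatal(err)
        }
    }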
	I0722 01:27:55.787026    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-227000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0722 01:27:59.360541    6300 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-227000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0722 01:27:59.360541    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:27:59.360541    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-227000 -DynamicMemoryEnabled $false
	I0722 01:28:01.579173    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:28:01.579173    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:01.591106    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-227000 -Count 2
	I0722 01:28:03.726929    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:28:03.738764    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:03.738875    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-227000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\boot2docker.iso'
	I0722 01:28:06.301747    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:28:06.301747    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:06.313678    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-227000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\disk.vhd'
	I0722 01:28:08.897896    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:28:08.897896    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:08.897896    6300 main.go:141] libmachine: Starting VM...
	I0722 01:28:08.909775    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-227000
	I0722 01:28:12.052679    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:28:12.052679    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:12.052746    6300 main.go:141] libmachine: Waiting for host to start...
	I0722 01:28:12.052746    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:28:14.289806    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:28:14.289806    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:14.289806    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:28:16.783645    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:28:16.795071    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:17.795694    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:28:19.962956    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:28:19.962956    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:19.963061    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:28:22.444740    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:28:22.456712    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:23.466711    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:28:25.604061    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:28:25.614809    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:25.614809    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:28:28.116814    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:28:28.116814    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:29.117678    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:28:31.319836    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:28:31.325685    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:31.326070    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:28:33.813811    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:28:33.824734    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:34.834197    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:28:37.128474    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:28:37.128474    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:37.140889    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:28:39.600133    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:28:39.600133    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:39.600133    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:28:41.672747    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:28:41.672747    6300 main.go:141] libmachine: [stderr =====>] : 
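Everything from "Waiting for host to start..." to this point is one polling loop: `(Get-VM).state` and the first adapter's first IP address are queried roughly once per second, and the empty stdout lines are iterations where the Default Switch's DHCP had not yet leased an address (172.28.193.96 appears at 01:28:39, about 27s after Start-VM). A condensed sketch of that wait:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func ps(cmd string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls until Hyper-V reports the VM Running and its first
    // network adapter has been handed an IP address.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            if err != nil {
                return "", err
            }
            if state == "Running" {
                ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
                if ip != "" {
                    return ip, nil
                }
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
    }

    func main() {
        ip, err := waitForIP("multinode-227000", 5*time.Minute)
        fmt.Println(ip, err)
    }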
	I0722 01:28:41.684151    6300 machine.go:94] provisionDockerMachine start ...
	I0722 01:28:41.684497    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:28:43.803742    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:28:43.803742    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:43.814890    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:28:46.297365    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:28:46.297365    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:46.315627    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:28:46.324261    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.96 22 <nil> <nil>}
	I0722 01:28:46.324261    6300 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 01:28:46.449571    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 01:28:46.449734    6300 buildroot.go:166] provisioning hostname "multinode-227000"
	I0722 01:28:46.449734    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:28:48.520299    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:28:48.532361    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:48.532361    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:28:50.983556    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:28:50.994830    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:51.000823    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:28:51.000959    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.96 22 <nil> <nil>}
	I0722 01:28:51.000959    6300 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-227000 && echo "multinode-227000" | sudo tee /etc/hostname
	I0722 01:28:51.147228    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-227000
	
	I0722 01:28:51.151552    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:28:53.226138    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:28:53.226138    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:53.237442    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:28:55.737313    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:28:55.750458    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:55.756899    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:28:55.757082    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.96 22 <nil> <nil>}
	I0722 01:28:55.757082    6300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-227000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-227000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-227000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 01:28:55.906354    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 01:28:55.906354    6300 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0722 01:28:55.906354    6300 buildroot.go:174] setting up certificates
	I0722 01:28:55.906354    6300 provision.go:84] configureAuth start
	I0722 01:28:55.906354    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:28:57.988918    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:28:57.988918    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:28:58.003292    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:00.458428    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:00.458428    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:00.458428    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:02.574612    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:02.574612    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:02.589157    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:05.034010    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:05.046262    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:05.046504    6300 provision.go:143] copyHostCerts
	I0722 01:29:05.046619    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0722 01:29:05.046928    6300 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0722 01:29:05.046928    6300 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0722 01:29:05.047476    6300 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0722 01:29:05.048808    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0722 01:29:05.048999    6300 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0722 01:29:05.048999    6300 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0722 01:29:05.048999    6300 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0722 01:29:05.050397    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0722 01:29:05.050397    6300 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0722 01:29:05.050397    6300 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0722 01:29:05.050923    6300 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0722 01:29:05.052243    6300 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-227000 san=[127.0.0.1 172.28.193.96 localhost minikube multinode-227000]
	I0722 01:29:05.224350    6300 provision.go:177] copyRemoteCerts
	I0722 01:29:05.234719    6300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 01:29:05.234719    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:07.344989    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:07.356504    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:07.356868    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:09.840966    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:09.840966    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:09.853837    6300 sshutil.go:53] new ssh client: &{IP:172.28.193.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:29:09.955375    6300 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7206009s)
	I0722 01:29:09.955375    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0722 01:29:09.956492    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 01:29:10.000962    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0722 01:29:10.001468    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0722 01:29:10.056728    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0722 01:29:10.057290    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 01:29:10.103767    6300 provision.go:87] duration metric: took 14.1971101s to configureAuth
	I0722 01:29:10.103809    6300 buildroot.go:189] setting minikube options for container-runtime
	I0722 01:29:10.104508    6300 config.go:182] Loaded profile config "multinode-227000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:29:10.104615    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:12.222737    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:12.235174    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:12.235174    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:14.715899    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:14.715899    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:14.734811    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:29:14.737882    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.96 22 <nil> <nil>}
	I0722 01:29:14.737882    6300 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 01:29:14.861654    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 01:29:14.861654    6300 buildroot.go:70] root file system type: tmpfs
	I0722 01:29:14.861859    6300 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 01:29:14.862000    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:16.972636    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:16.984170    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:16.984170    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:19.454750    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:19.454750    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:19.472776    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:29:19.473524    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.96 22 <nil> <nil>}
	I0722 01:29:19.473524    6300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 01:29:19.622042    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 01:29:19.622042    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:21.703468    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:21.703468    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:21.715241    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:24.167852    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:24.167852    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:24.184785    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:29:24.185338    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.96 22 <nil> <nil>}
	I0722 01:29:24.185338    6300 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 01:29:26.334850    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 01:29:26.334850    6300 machine.go:97] duration metric: took 44.6501748s to provisionDockerMachine
	I0722 01:29:26.334850    6300 client.go:171] duration metric: took 1m53.8773857s to LocalClient.Create
	I0722 01:29:26.334850    6300 start.go:167] duration metric: took 1m53.8773857s to libmachine.API.Create "multinode-227000"
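The docker.service write above uses an idempotent swap: the rendered unit always lands in docker.service.new, and the `diff -u ... || { mv ...; systemctl ...; }` command only reloads and restarts Docker when the unit actually changed. Here diff fails because no unit existed yet, so the new file is installed and `systemctl enable` prints the "Created symlink" line. A sketch of composing that remote command:

    package main

    import "fmt"

    // swapUnitCmd returns the shell command that installs a freshly rendered
    // unit only when it differs from the live one, avoiding needless restarts.
    func swapUnitCmd(unit string) string {
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
                "sudo systemctl -f restart docker; }",
            unit)
    }

    func main() {
        fmt.Println(swapUnitCmd("/lib/systemd/system/docker.service"))
    }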
	I0722 01:29:26.334850    6300 start.go:293] postStartSetup for "multinode-227000" (driver="hyperv")
	I0722 01:29:26.334850    6300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 01:29:26.348743    6300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 01:29:26.348743    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:28.445215    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:28.445307    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:28.445307    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:30.943111    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:30.945762    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:30.946403    6300 sshutil.go:53] new ssh client: &{IP:172.28.193.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:29:31.051127    6300 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7023289s)
	I0722 01:29:31.065923    6300 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 01:29:31.076892    6300 command_runner.go:130] > NAME=Buildroot
	I0722 01:29:31.076892    6300 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0722 01:29:31.076892    6300 command_runner.go:130] > ID=buildroot
	I0722 01:29:31.077058    6300 command_runner.go:130] > VERSION_ID=2023.02.9
	I0722 01:29:31.077058    6300 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0722 01:29:31.077058    6300 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 01:29:31.077200    6300 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0722 01:29:31.077731    6300 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0722 01:29:31.077945    6300 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0722 01:29:31.078525    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0722 01:29:31.087881    6300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 01:29:31.098021    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0722 01:29:31.149793    6300 start.go:296] duration metric: took 4.8148865s for postStartSetup
	I0722 01:29:31.152976    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:33.261789    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:33.274741    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:33.274888    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:35.745242    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:35.745242    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:35.756704    6300 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\config.json ...
	I0722 01:29:35.760021    6300 start.go:128] duration metric: took 2m3.3061119s to createHost
	I0722 01:29:35.760191    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:37.897789    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:37.897789    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:37.908837    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:40.495738    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:40.506996    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:40.513102    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:29:40.513876    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.96 22 <nil> <nil>}
	I0722 01:29:40.513876    6300 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 01:29:40.636514    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721611780.653529060
	
	I0722 01:29:40.636514    6300 fix.go:216] guest clock: 1721611780.653529060
	I0722 01:29:40.636514    6300 fix.go:229] Guest: 2024-07-22 01:29:40.65352906 +0000 UTC Remote: 2024-07-22 01:29:35.7601231 +0000 UTC m=+128.919181701 (delta=4.89340596s)
	I0722 01:29:40.636514    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:42.767250    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:42.778842    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:42.778842    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:45.274581    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:45.274581    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:45.291211    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:29:45.291887    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.96 22 <nil> <nil>}
	I0722 01:29:45.291916    6300 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721611780
	I0722 01:29:45.427163    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jul 22 01:29:40 UTC 2024
	
	I0722 01:29:45.427163    6300 fix.go:236] clock set: Mon Jul 22 01:29:40 UTC 2024
	 (err=<nil>)
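The clock fixup above samples the guest with `date +%s.%N`, compares it with the host-side timestamp recorded when createHost finished (delta=4.89s here), and resets the guest with `sudo date -s @<seconds>`. A sketch of that comparison, with the tolerance and which side is trusted treated as assumptions rather than minikube's exact fix.go behavior:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // syncGuestClock parses a guest "date +%s.%N" sample and, when drift from
    // the host exceeds the tolerance, returns the command that resets it.
    func syncGuestClock(guestSample string, host time.Time, tolerance time.Duration) (string, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestSample), 64)
        if err != nil {
            return "", false, err
        }
        guest := time.Unix(int64(secs), 0) // fractional part ignored in this sketch
        drift := guest.Sub(host)
        if math.Abs(drift.Seconds()) < tolerance.Seconds() {
            return "", false, nil
        }
        return fmt.Sprintf("sudo date -s @%d", host.Unix()), true, nil
    }

    func main() {
        cmd, needed, err := syncGuestClock("1721611780.653529060",
            time.Unix(1721611775, 0), 2*time.Second)
        fmt.Println(cmd, needed, err)
    }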
	I0722 01:29:45.427163    6300 start.go:83] releasing machines lock for "multinode-227000", held for 2m12.9740009s
	I0722 01:29:45.427931    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:47.499104    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:47.499104    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:47.511707    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:50.041622    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:50.041622    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:50.055195    6300 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0722 01:29:50.055195    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:50.070065    6300 ssh_runner.go:195] Run: cat /version.json
	I0722 01:29:50.070065    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:29:52.233870    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:52.233942    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:52.233942    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:52.234732    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:29:52.234732    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:52.237205    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:29:54.870255    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:54.870255    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:54.870554    6300 sshutil.go:53] new ssh client: &{IP:172.28.193.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:29:54.891763    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:29:54.891763    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:29:54.891763    6300 sshutil.go:53] new ssh client: &{IP:172.28.193.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:29:54.965892    6300 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0722 01:29:54.965980    6300 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.910727s)
	W0722 01:29:54.965980    6300 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0722 01:29:54.981246    6300 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0722 01:29:54.981246    6300 ssh_runner.go:235] Completed: cat /version.json: (4.9111239s)
	I0722 01:29:54.996805    6300 ssh_runner.go:195] Run: systemctl --version
	I0722 01:29:55.004844    6300 command_runner.go:130] > systemd 252 (252)
	I0722 01:29:55.004972    6300 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0722 01:29:55.016885    6300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 01:29:55.019596    6300 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0722 01:29:55.024831    6300 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 01:29:55.034971    6300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 01:29:55.055380    6300 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0722 01:29:55.064185    6300 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 01:29:55.064185    6300 start.go:495] detecting cgroup driver to use...
	I0722 01:29:55.064613    6300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 01:29:55.097249    6300 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0722 01:29:55.107889    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0722 01:29:55.107889    6300 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0722 01:29:55.107889    6300 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
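Note the failed probe behind these warnings: the runner sent `curl.exe` into the Linux guest, so bash exits 127 (command not found) and the run is reported as a registry connectivity problem rather than a missing binary. A hypothetical guard (not minikube's actual fix) that picks the binary name by where the command executes instead of by the driver host's OS:

    package main

    import (
        "fmt"
        "runtime"
    )

    // curlBinary picks "curl.exe" only when the command will run on a Windows
    // host, not merely because the driver host is Windows: commands sent over
    // SSH to the Linux guest must always use plain "curl".
    func curlBinary(runsOnGuest bool) string {
        if !runsOnGuest && runtime.GOOS == "windows" {
            return "curl.exe"
        }
        return "curl"
    }

    func main() {
        fmt.Printf("%s -sS -m 2 https://registry.k8s.io/\n", curlBinary(true))
    }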
	I0722 01:29:55.140841    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 01:29:55.161614    6300 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 01:29:55.171864    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 01:29:55.209703    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 01:29:55.257533    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 01:29:55.292601    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 01:29:55.323755    6300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 01:29:55.353339    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 01:29:55.384181    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 01:29:55.415270    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 01:29:55.449137    6300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 01:29:55.454356    6300 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0722 01:29:55.478322    6300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 01:29:55.506534    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:29:55.698474    6300 ssh_runner.go:195] Run: sudo systemctl restart containerd
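The run of sed edits above rewrites /etc/containerd/config.toml for the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, and the /etc/cni/net.d conf_dir, then reloads and restarts containerd. A quick post-edit check, as a sketch:

    # Sketch: verify the cgroup-driver edit took effect before restarting.
    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
    sudo systemctl daemon-reload && sudo systemctl restart containerd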
	I0722 01:29:55.733851    6300 start.go:495] detecting cgroup driver to use...
	I0722 01:29:55.745056    6300 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 01:29:55.771019    6300 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0722 01:29:55.771019    6300 command_runner.go:130] > [Unit]
	I0722 01:29:55.771019    6300 command_runner.go:130] > Description=Docker Application Container Engine
	I0722 01:29:55.771019    6300 command_runner.go:130] > Documentation=https://docs.docker.com
	I0722 01:29:55.771019    6300 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0722 01:29:55.771019    6300 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0722 01:29:55.771019    6300 command_runner.go:130] > StartLimitBurst=3
	I0722 01:29:55.771019    6300 command_runner.go:130] > StartLimitIntervalSec=60
	I0722 01:29:55.771019    6300 command_runner.go:130] > [Service]
	I0722 01:29:55.771019    6300 command_runner.go:130] > Type=notify
	I0722 01:29:55.771019    6300 command_runner.go:130] > Restart=on-failure
	I0722 01:29:55.771019    6300 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0722 01:29:55.771019    6300 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0722 01:29:55.771019    6300 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0722 01:29:55.771019    6300 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0722 01:29:55.771019    6300 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0722 01:29:55.771019    6300 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0722 01:29:55.771019    6300 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0722 01:29:55.771019    6300 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0722 01:29:55.771019    6300 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0722 01:29:55.771019    6300 command_runner.go:130] > ExecStart=
	I0722 01:29:55.771019    6300 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0722 01:29:55.771019    6300 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0722 01:29:55.771648    6300 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0722 01:29:55.771956    6300 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0722 01:29:55.771956    6300 command_runner.go:130] > LimitNOFILE=infinity
	I0722 01:29:55.771956    6300 command_runner.go:130] > LimitNPROC=infinity
	I0722 01:29:55.771956    6300 command_runner.go:130] > LimitCORE=infinity
	I0722 01:29:55.771956    6300 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0722 01:29:55.771956    6300 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0722 01:29:55.771956    6300 command_runner.go:130] > TasksMax=infinity
	I0722 01:29:55.771956    6300 command_runner.go:130] > TimeoutStartSec=0
	I0722 01:29:55.771956    6300 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0722 01:29:55.771956    6300 command_runner.go:130] > Delegate=yes
	I0722 01:29:55.771956    6300 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0722 01:29:55.771956    6300 command_runner.go:130] > KillMode=process
	I0722 01:29:55.771956    6300 command_runner.go:130] > [Install]
	I0722 01:29:55.771956    6300 command_runner.go:130] > WantedBy=multi-user.target
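The unit's own comments explain the empty ExecStart= line: systemd rejects multiple ExecStart values for anything but Type=oneshot services, so a drop-in must first clear the inherited command before setting its own. The same pattern in a minimal, hypothetical drop-in (path and flags illustrative only):

    # Sketch: clear the inherited ExecStart, then set a replacement (hypothetical override path).
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload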
	I0722 01:29:55.786675    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 01:29:55.821113    6300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 01:29:55.867179    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 01:29:55.900984    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 01:29:55.932560    6300 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 01:29:55.993713    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 01:29:56.016439    6300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 01:29:56.050495    6300 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0722 01:29:56.060864    6300 ssh_runner.go:195] Run: which cri-dockerd
	I0722 01:29:56.064003    6300 command_runner.go:130] > /usr/bin/cri-dockerd
	I0722 01:29:56.077194    6300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 01:29:56.090167    6300 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 01:29:56.141153    6300 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 01:29:56.338637    6300 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 01:29:56.507545    6300 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 01:29:56.507883    6300 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 01:29:56.550532    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:29:56.736394    6300 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 01:29:59.274072    6300 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5317939s)
	I0722 01:29:59.285091    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 01:29:59.318715    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 01:29:59.351286    6300 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 01:29:59.544711    6300 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 01:29:59.734853    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:29:59.925261    6300 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 01:29:59.962896    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 01:30:00.000337    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:30:00.191060    6300 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
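With docker restarted, the cri-dockerd shim is unmasked, enabled, and restarted so the kubelet can reach the Docker engine over a CRI socket. The sequence from the log, condensed as a sketch:

    # Sketch: bring up the cri-dockerd socket and service (same steps as logged above).
    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket cri-docker.service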
	I0722 01:30:00.291822    6300 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 01:30:00.306887    6300 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 01:30:00.315886    6300 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0722 01:30:00.315886    6300 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0722 01:30:00.315886    6300 command_runner.go:130] > Device: 0,22	Inode: 878         Links: 1
	I0722 01:30:00.315886    6300 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0722 01:30:00.315886    6300 command_runner.go:130] > Access: 2024-07-22 01:30:00.232458360 +0000
	I0722 01:30:00.316052    6300 command_runner.go:130] > Modify: 2024-07-22 01:30:00.232458360 +0000
	I0722 01:30:00.316084    6300 command_runner.go:130] > Change: 2024-07-22 01:30:00.236458326 +0000
	I0722 01:30:00.316084    6300 command_runner.go:130] >  Birth: -
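The "Will wait 60s" step polls for /var/run/cri-dockerd.sock; here the socket appears on the first stat. Expressed as an illustrative shell loop (not minikube's actual Go implementation):

    # Sketch: poll for the CRI socket with a 60-second budget.
    for _ in $(seq 1 60); do
      stat /var/run/cri-dockerd.sock >/dev/null 2>&1 && break
      sleep 1
    done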
	I0722 01:30:00.316084    6300 start.go:563] Will wait 60s for crictl version
	I0722 01:30:00.326934    6300 ssh_runner.go:195] Run: which crictl
	I0722 01:30:00.330595    6300 command_runner.go:130] > /usr/bin/crictl
	I0722 01:30:00.344551    6300 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 01:30:00.412131    6300 command_runner.go:130] > Version:  0.1.0
	I0722 01:30:00.412131    6300 command_runner.go:130] > RuntimeName:  docker
	I0722 01:30:00.413567    6300 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0722 01:30:00.413567    6300 command_runner.go:130] > RuntimeApiVersion:  v1
	I0722 01:30:00.413567    6300 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 01:30:00.423676    6300 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 01:30:00.458717    6300 command_runner.go:130] > 27.0.3
	I0722 01:30:00.469457    6300 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 01:30:00.497739    6300 command_runner.go:130] > 27.0.3
	I0722 01:30:00.506928    6300 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 01:30:00.507168    6300 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0722 01:30:00.510606    6300 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0722 01:30:00.510606    6300 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0722 01:30:00.510606    6300 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0722 01:30:00.510606    6300 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e8:0a:ec Flags:up|broadcast|multicast|running}
	I0722 01:30:00.513710    6300 ip.go:210] interface addr: fe80::cedd:59ec:4db2:d0bf/64
	I0722 01:30:00.513710    6300 ip.go:210] interface addr: 172.28.192.1/20
	I0722 01:30:00.525497    6300 ssh_runner.go:195] Run: grep 172.28.192.1	host.minikube.internal$ /etc/hosts
	I0722 01:30:00.527453    6300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
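The hosts update is idempotent: strip any prior host.minikube.internal line, append the gateway IP found in the interface scan, and copy the result back. The same pattern, spelled out as a sketch:

    # Sketch: idempotent /etc/hosts entry (IP taken from the interface scan above).
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '172.28.192.1\thost.minikube.internal\n'; } > /tmp/h.$$ \
      && sudo cp /tmp/h.$$ /etc/hosts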
	I0722 01:30:00.552183    6300 kubeadm.go:883] updating cluster {Name:multinode-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.96 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 01:30:00.552321    6300 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 01:30:00.562514    6300 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 01:30:00.585877    6300 docker.go:685] Got preloaded images: 
	I0722 01:30:00.585877    6300 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0722 01:30:00.598665    6300 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 01:30:00.616513    6300 command_runner.go:139] > {"Repositories":{}}
	I0722 01:30:00.627977    6300 ssh_runner.go:195] Run: which lz4
	I0722 01:30:00.630858    6300 command_runner.go:130] > /usr/bin/lz4
	I0722 01:30:00.634471    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0722 01:30:00.645622    6300 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 01:30:00.649224    6300 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 01:30:00.653067    6300 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 01:30:00.653375    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0722 01:30:02.685206    6300 docker.go:649] duration metric: took 2.0471677s to copy over tarball
	I0722 01:30:02.696631    6300 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 01:30:11.198842    6300 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.500198s)
	I0722 01:30:11.198842    6300 ssh_runner.go:146] rm: /preloaded.tar.lz4
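Because no images were preloaded, the cached tarball is copied into the VM and unpacked straight into /var, preserving xattrs so file capabilities survive extraction. The step as a standalone sketch:

    # Sketch: unpack the preload into /var (same flags as the logged command), then clean up.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4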
	I0722 01:30:11.261766    6300 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 01:30:11.280502    6300 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0722 01:30:11.280763    6300 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0722 01:30:11.321736    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:30:11.513647    6300 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 01:30:14.900616    6300 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3868961s)
	I0722 01:30:14.909844    6300 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 01:30:14.935533    6300 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0722 01:30:14.935533    6300 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0722 01:30:14.936606    6300 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0722 01:30:14.936606    6300 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0722 01:30:14.936606    6300 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0722 01:30:14.936606    6300 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0722 01:30:14.936606    6300 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0722 01:30:14.936606    6300 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 01:30:14.936606    6300 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0722 01:30:14.936770    6300 cache_images.go:84] Images are preloaded, skipping loading
	I0722 01:30:14.936854    6300 kubeadm.go:934] updating node { 172.28.193.96 8443 v1.30.3 docker true true} ...
	I0722 01:30:14.937091    6300 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-227000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.193.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 01:30:14.950041    6300 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0722 01:30:14.975159    6300 command_runner.go:130] > cgroupfs
	I0722 01:30:14.983334    6300 cni.go:84] Creating CNI manager for ""
	I0722 01:30:14.983391    6300 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 01:30:14.983391    6300 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 01:30:14.983391    6300 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.193.96 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-227000 NodeName:multinode-227000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.193.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.193.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 01:30:14.983391    6300 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.193.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-227000"
	  kubeletExtraArgs:
	    node-ip: 172.28.193.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.193.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
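The rendered config above stitches InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into a single kubeadm.yaml. One way to sanity-check such a file before the real init, sketched under the assumption that the same binaries path is used:

    # Sketch: exercise the generated config without mutating the node.
    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run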
	I0722 01:30:14.996795    6300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 01:30:15.014939    6300 command_runner.go:130] > kubeadm
	I0722 01:30:15.014985    6300 command_runner.go:130] > kubectl
	I0722 01:30:15.014985    6300 command_runner.go:130] > kubelet
	I0722 01:30:15.015058    6300 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 01:30:15.025422    6300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 01:30:15.029919    6300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0722 01:30:15.076889    6300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 01:30:15.093173    6300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0722 01:30:15.153805    6300 ssh_runner.go:195] Run: grep 172.28.193.96	control-plane.minikube.internal$ /etc/hosts
	I0722 01:30:15.159632    6300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.193.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 01:30:15.195989    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:30:15.418329    6300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 01:30:15.448018    6300 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000 for IP: 172.28.193.96
	I0722 01:30:15.448018    6300 certs.go:194] generating shared ca certs ...
	I0722 01:30:15.448018    6300 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 01:30:15.449117    6300 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0722 01:30:15.449232    6300 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0722 01:30:15.449755    6300 certs.go:256] generating profile certs ...
	I0722 01:30:15.450417    6300 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\client.key
	I0722 01:30:15.450544    6300 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\client.crt with IP's: []
	I0722 01:30:15.827474    6300 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\client.crt ...
	I0722 01:30:15.827474    6300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\client.crt: {Name:mk53d7ab06f365d59ecaff162b7c66792d18ffa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 01:30:15.833800    6300 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\client.key ...
	I0722 01:30:15.833800    6300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\client.key: {Name:mk5768f996dffe948bdefa127a5e0ae949b5be74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 01:30:15.835322    6300 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.key.2e7bc890
	I0722 01:30:15.836619    6300 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.crt.2e7bc890 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.193.96]
	I0722 01:30:16.045464    6300 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.crt.2e7bc890 ...
	I0722 01:30:16.045464    6300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.crt.2e7bc890: {Name:mk4a589cb7db5a02f0e638abe1a6e0d8b26e8ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 01:30:16.048021    6300 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.key.2e7bc890 ...
	I0722 01:30:16.048021    6300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.key.2e7bc890: {Name:mkca85f333a03f90257ec2fa490ea501ae0cd27c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 01:30:16.049399    6300 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.crt.2e7bc890 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.crt
	I0722 01:30:16.067543    6300 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.key.2e7bc890 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.key
	I0722 01:30:16.069134    6300 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\proxy-client.key
	I0722 01:30:16.069400    6300 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\proxy-client.crt with IP's: []
	I0722 01:30:16.316664    6300 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\proxy-client.crt ...
	I0722 01:30:16.316664    6300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\proxy-client.crt: {Name:mk35be17f1ed378217214dd74fdff652a25bcc89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 01:30:16.322758    6300 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\proxy-client.key ...
	I0722 01:30:16.322758    6300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\proxy-client.key: {Name:mk82572704a46330b78c76234675bc795526e1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 01:30:16.324284    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 01:30:16.325347    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0722 01:30:16.325646    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 01:30:16.325897    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 01:30:16.326119    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 01:30:16.326119    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 01:30:16.326119    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 01:30:16.337211    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 01:30:16.338181    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem (1338 bytes)
	W0722 01:30:16.338559    6300 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100_empty.pem, impossibly tiny 0 bytes
	I0722 01:30:16.338627    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0722 01:30:16.338978    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0722 01:30:16.339387    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0722 01:30:16.339387    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0722 01:30:16.340095    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem (1708 bytes)
	I0722 01:30:16.340095    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /usr/share/ca-certificates/51002.pem
	I0722 01:30:16.340095    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 01:30:16.340095    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem -> /usr/share/ca-certificates/5100.pem
	I0722 01:30:16.341975    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 01:30:16.386104    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 01:30:16.425292    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 01:30:16.469336    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 01:30:16.513060    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 01:30:16.563163    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 01:30:16.606948    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 01:30:16.654443    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 01:30:16.708769    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /usr/share/ca-certificates/51002.pem (1708 bytes)
	I0722 01:30:16.755671    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 01:30:16.801693    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem --> /usr/share/ca-certificates/5100.pem (1338 bytes)
	I0722 01:30:16.843962    6300 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 01:30:16.888734    6300 ssh_runner.go:195] Run: openssl version
	I0722 01:30:16.891390    6300 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0722 01:30:16.908172    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51002.pem && ln -fs /usr/share/ca-certificates/51002.pem /etc/ssl/certs/51002.pem"
	I0722 01:30:16.941855    6300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51002.pem
	I0722 01:30:16.948525    6300 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 21 23:45 /usr/share/ca-certificates/51002.pem
	I0722 01:30:16.948525    6300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:45 /usr/share/ca-certificates/51002.pem
	I0722 01:30:16.959583    6300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51002.pem
	I0722 01:30:16.963202    6300 command_runner.go:130] > 3ec20f2e
	I0722 01:30:16.980718    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51002.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 01:30:17.012590    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 01:30:17.044404    6300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 01:30:17.048139    6300 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 21 23:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 01:30:17.053493    6300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 01:30:17.064498    6300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 01:30:17.074936    6300 command_runner.go:130] > b5213941
	I0722 01:30:17.087084    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 01:30:17.135823    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5100.pem && ln -fs /usr/share/ca-certificates/5100.pem /etc/ssl/certs/5100.pem"
	I0722 01:30:17.164998    6300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5100.pem
	I0722 01:30:17.174571    6300 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 21 23:45 /usr/share/ca-certificates/5100.pem
	I0722 01:30:17.174619    6300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:45 /usr/share/ca-certificates/5100.pem
	I0722 01:30:17.186254    6300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5100.pem
	I0722 01:30:17.196297    6300 command_runner.go:130] > 51391683
	I0722 01:30:17.208611    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5100.pem /etc/ssl/certs/51391683.0"
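Each CA above is installed by copying the PEM into /usr/share/ca-certificates and then symlinking it at its OpenSSL subject hash under /etc/ssl/certs, the layout OpenSSL uses to locate trust anchors. The pattern as a sketch, using the minikubeCA file from this log:

    # Sketch: link a CA at its subject-hash name so OpenSSL can find it.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"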
	I0722 01:30:17.240505    6300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 01:30:17.247852    6300 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 01:30:17.247852    6300 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 01:30:17.248022    6300 kubeadm.go:392] StartCluster: {Name:multinode-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.96 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 01:30:17.256181    6300 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 01:30:17.291237    6300 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 01:30:17.298600    6300 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0722 01:30:17.308267    6300 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0722 01:30:17.308267    6300 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0722 01:30:17.319982    6300 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 01:30:17.350034    6300 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 01:30:17.353515    6300 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0722 01:30:17.353515    6300 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0722 01:30:17.353515    6300 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0722 01:30:17.353515    6300 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 01:30:17.368763    6300 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 01:30:17.368763    6300 kubeadm.go:157] found existing configuration files:
	
	I0722 01:30:17.380481    6300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 01:30:17.384492    6300 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 01:30:17.384492    6300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 01:30:17.410252    6300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 01:30:17.440509    6300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 01:30:17.443532    6300 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 01:30:17.458515    6300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 01:30:17.470534    6300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 01:30:17.500722    6300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 01:30:17.517236    6300 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 01:30:17.520441    6300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 01:30:17.530882    6300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 01:30:17.560097    6300 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 01:30:17.580363    6300 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 01:30:17.580527    6300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 01:30:17.592130    6300 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 01:30:17.614439    6300 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 01:30:17.904370    6300 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0722 01:30:17.904542    6300 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0722 01:30:17.904803    6300 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 01:30:17.904803    6300 command_runner.go:130] > [preflight] Running pre-flight checks
	I0722 01:30:18.101934    6300 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 01:30:18.101934    6300 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 01:30:18.101934    6300 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 01:30:18.101934    6300 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 01:30:18.101934    6300 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 01:30:18.101934    6300 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 01:30:18.420870    6300 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 01:30:18.420870    6300 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 01:30:18.426010    6300 out.go:204]   - Generating certificates and keys ...
	I0722 01:30:18.426251    6300 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0722 01:30:18.426251    6300 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 01:30:18.426444    6300 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0722 01:30:18.426444    6300 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 01:30:18.692433    6300 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 01:30:18.692502    6300 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0722 01:30:19.101570    6300 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0722 01:30:19.112831    6300 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0722 01:30:19.549946    6300 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0722 01:30:19.549946    6300 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0722 01:30:19.757835    6300 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0722 01:30:19.761395    6300 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0722 01:30:19.932952    6300 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0722 01:30:19.932952    6300 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0722 01:30:19.933417    6300 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-227000] and IPs [172.28.193.96 127.0.0.1 ::1]
	I0722 01:30:19.933417    6300 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-227000] and IPs [172.28.193.96 127.0.0.1 ::1]
	I0722 01:30:20.293606    6300 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0722 01:30:20.299958    6300 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0722 01:30:20.300217    6300 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-227000] and IPs [172.28.193.96 127.0.0.1 ::1]
	I0722 01:30:20.300217    6300 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-227000] and IPs [172.28.193.96 127.0.0.1 ::1]
	I0722 01:30:20.832551    6300 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 01:30:20.832551    6300 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0722 01:30:20.994489    6300 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 01:30:20.997468    6300 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0722 01:30:21.281449    6300 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0722 01:30:21.281449    6300 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0722 01:30:21.281767    6300 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 01:30:21.281805    6300 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 01:30:21.424949    6300 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 01:30:21.424949    6300 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 01:30:21.511534    6300 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 01:30:21.512758    6300 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0722 01:30:21.687770    6300 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 01:30:21.690270    6300 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 01:30:21.832892    6300 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 01:30:21.832892    6300 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 01:30:21.943236    6300 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 01:30:21.946795    6300 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 01:30:21.947096    6300 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 01:30:21.947096    6300 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 01:30:21.958307    6300 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 01:30:21.958307    6300 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 01:30:21.962102    6300 out.go:204]   - Booting up control plane ...
	I0722 01:30:21.962782    6300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 01:30:21.962782    6300 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 01:30:21.962782    6300 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 01:30:21.962782    6300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 01:30:21.963313    6300 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 01:30:21.963349    6300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 01:30:21.984568    6300 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 01:30:21.984614    6300 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 01:30:21.985741    6300 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 01:30:21.985839    6300 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 01:30:21.985922    6300 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 01:30:21.986008    6300 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0722 01:30:22.216142    6300 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 01:30:22.216142    6300 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0722 01:30:22.216454    6300 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 01:30:22.216514    6300 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 01:30:22.715013    6300 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.67134ms
	I0722 01:30:22.718086    6300 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.67134ms
	I0722 01:30:22.718397    6300 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 01:30:22.718397    6300 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0722 01:30:29.721235    6300 command_runner.go:130] > [api-check] The API server is healthy after 7.002584465s
	I0722 01:30:29.721235    6300 kubeadm.go:310] [api-check] The API server is healthy after 7.002584465s
	I0722 01:30:29.743961    6300 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 01:30:29.743961    6300 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 01:30:29.779530    6300 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 01:30:29.779530    6300 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 01:30:29.824406    6300 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0722 01:30:29.824406    6300 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 01:30:29.824963    6300 kubeadm.go:310] [mark-control-plane] Marking the node multinode-227000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 01:30:29.824995    6300 command_runner.go:130] > [mark-control-plane] Marking the node multinode-227000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 01:30:29.846100    6300 command_runner.go:130] > [bootstrap-token] Using token: erqcx3.tu6ola2z175mf4ol
	I0722 01:30:29.846100    6300 kubeadm.go:310] [bootstrap-token] Using token: erqcx3.tu6ola2z175mf4ol
	I0722 01:30:29.856161    6300 out.go:204]   - Configuring RBAC rules ...
	I0722 01:30:29.856687    6300 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 01:30:29.856724    6300 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 01:30:29.877368    6300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 01:30:29.877425    6300 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 01:30:29.896438    6300 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 01:30:29.896438    6300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 01:30:29.896812    6300 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 01:30:29.896812    6300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 01:30:29.910283    6300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 01:30:29.910283    6300 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 01:30:29.913177    6300 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 01:30:29.915809    6300 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 01:30:30.137070    6300 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 01:30:30.137070    6300 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 01:30:30.606106    6300 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 01:30:30.606106    6300 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0722 01:30:31.136989    6300 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 01:30:31.137042    6300 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0722 01:30:31.137658    6300 kubeadm.go:310] 
	I0722 01:30:31.137658    6300 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 01:30:31.137658    6300 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0722 01:30:31.137658    6300 kubeadm.go:310] 
	I0722 01:30:31.138801    6300 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0722 01:30:31.138841    6300 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 01:30:31.138936    6300 kubeadm.go:310] 
	I0722 01:30:31.139036    6300 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0722 01:30:31.139098    6300 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 01:30:31.139180    6300 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 01:30:31.139180    6300 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 01:30:31.139180    6300 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 01:30:31.139180    6300 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 01:30:31.139180    6300 kubeadm.go:310] 
	I0722 01:30:31.139180    6300 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0722 01:30:31.139180    6300 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 01:30:31.139180    6300 kubeadm.go:310] 
	I0722 01:30:31.139714    6300 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 01:30:31.139714    6300 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 01:30:31.139714    6300 kubeadm.go:310] 
	I0722 01:30:31.139939    6300 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 01:30:31.139939    6300 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0722 01:30:31.139992    6300 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 01:30:31.139992    6300 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 01:30:31.139992    6300 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 01:30:31.139992    6300 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 01:30:31.139992    6300 kubeadm.go:310] 
	I0722 01:30:31.140525    6300 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 01:30:31.140525    6300 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0722 01:30:31.140743    6300 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0722 01:30:31.140808    6300 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 01:30:31.140808    6300 kubeadm.go:310] 
	I0722 01:30:31.141095    6300 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token erqcx3.tu6ola2z175mf4ol \
	I0722 01:30:31.141095    6300 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token erqcx3.tu6ola2z175mf4ol \
	I0722 01:30:31.141310    6300 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b \
	I0722 01:30:31.141310    6300 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b \
	I0722 01:30:31.141310    6300 command_runner.go:130] > 	--control-plane 
	I0722 01:30:31.141310    6300 kubeadm.go:310] 	--control-plane 
	I0722 01:30:31.141310    6300 kubeadm.go:310] 
	I0722 01:30:31.141310    6300 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0722 01:30:31.141310    6300 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 01:30:31.141310    6300 kubeadm.go:310] 
	I0722 01:30:31.141858    6300 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token erqcx3.tu6ola2z175mf4ol \
	I0722 01:30:31.141858    6300 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token erqcx3.tu6ola2z175mf4ol \
	I0722 01:30:31.142270    6300 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b 
	I0722 01:30:31.142270    6300 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b 
	I0722 01:30:31.142270    6300 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 01:30:31.142270    6300 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 01:30:31.142905    6300 cni.go:84] Creating CNI manager for ""
	I0722 01:30:31.142988    6300 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 01:30:31.157407    6300 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0722 01:30:31.175564    6300 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0722 01:30:31.184199    6300 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0722 01:30:31.184337    6300 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0722 01:30:31.184337    6300 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0722 01:30:31.184337    6300 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0722 01:30:31.184337    6300 command_runner.go:130] > Access: 2024-07-22 01:28:35.445451500 +0000
	I0722 01:30:31.184337    6300 command_runner.go:130] > Modify: 2024-07-18 23:04:21.000000000 +0000
	I0722 01:30:31.184337    6300 command_runner.go:130] > Change: 2024-07-22 01:28:26.238000000 +0000
	I0722 01:30:31.184337    6300 command_runner.go:130] >  Birth: -
	I0722 01:30:31.184536    6300 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0722 01:30:31.184536    6300 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0722 01:30:31.236798    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0722 01:30:31.860507    6300 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0722 01:30:31.860507    6300 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0722 01:30:31.860507    6300 command_runner.go:130] > serviceaccount/kindnet created
	I0722 01:30:31.860507    6300 command_runner.go:130] > daemonset.apps/kindnet created
	I0722 01:30:31.860707    6300 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 01:30:31.875447    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:31.877325    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-227000 minikube.k8s.io/updated_at=2024_07_22T01_30_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=multinode-227000 minikube.k8s.io/primary=true
	I0722 01:30:31.895253    6300 command_runner.go:130] > -16
	I0722 01:30:31.895253    6300 ops.go:34] apiserver oom_adj: -16
	I0722 01:30:32.104435    6300 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0722 01:30:32.126076    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:32.126416    6300 command_runner.go:130] > node/multinode-227000 labeled
	I0722 01:30:32.238951    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:32.632719    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:32.749220    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:33.130985    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:33.223661    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:33.633778    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:33.779517    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:34.132154    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:34.245278    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:34.637654    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:34.746025    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:35.132714    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:35.229838    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:35.636684    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:35.746498    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:36.139998    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:36.251751    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:36.632961    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:36.746158    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:37.130091    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:37.237661    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:37.629797    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:37.737355    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:38.137344    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:38.228993    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:38.633549    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:38.764181    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:39.136312    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:39.242823    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:39.630353    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:39.760105    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:40.126595    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:40.240866    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:40.627639    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:40.785979    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:41.151699    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:41.267834    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:41.628991    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:41.744128    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:42.134254    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:42.243264    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:42.639039    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:42.776318    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:43.141801    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:43.264551    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:43.631987    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:43.764344    6300 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0722 01:30:44.139246    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 01:30:44.300261    6300 command_runner.go:130] > NAME      SECRETS   AGE
	I0722 01:30:44.300261    6300 command_runner.go:130] > default   0         0s
	I0722 01:30:44.300261    6300 kubeadm.go:1113] duration metric: took 12.439351s to wait for elevateKubeSystemPrivileges
	I0722 01:30:44.300261    6300 kubeadm.go:394] duration metric: took 27.0519246s to StartCluster
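The retry loop above shells out to kubectl once per ~500ms until the token controller has created the "default" ServiceAccount. A minimal client-go sketch of the same wait, assuming a kubeconfig at the default location (the program below is illustrative, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, up to 30s, until the "default" ServiceAccount exists.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 30*time.Second, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; keep polling
			}
			return err == nil, err
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is present")
}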
	I0722 01:30:44.300261    6300 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 01:30:44.300261    6300 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 01:30:44.302246    6300 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 01:30:44.303248    6300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0722 01:30:44.303248    6300 start.go:235] Will wait 6m0s for node &{Name: IP:172.28.193.96 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 01:30:44.303248    6300 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 01:30:44.303248    6300 addons.go:69] Setting storage-provisioner=true in profile "multinode-227000"
	I0722 01:30:44.303248    6300 addons.go:234] Setting addon storage-provisioner=true in "multinode-227000"
	I0722 01:30:44.303248    6300 addons.go:69] Setting default-storageclass=true in profile "multinode-227000"
	I0722 01:30:44.303248    6300 host.go:66] Checking if "multinode-227000" exists ...
	I0722 01:30:44.304265    6300 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-227000"
	I0722 01:30:44.304265    6300 config.go:182] Loaded profile config "multinode-227000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:30:44.304265    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:30:44.305278    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:30:44.310253    6300 out.go:177] * Verifying Kubernetes components...
	I0722 01:30:44.327269    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:30:44.696734    6300 command_runner.go:130] > apiVersion: v1
	I0722 01:30:44.696734    6300 command_runner.go:130] > data:
	I0722 01:30:44.696734    6300 command_runner.go:130] >   Corefile: |
	I0722 01:30:44.696734    6300 command_runner.go:130] >     .:53 {
	I0722 01:30:44.696734    6300 command_runner.go:130] >         errors
	I0722 01:30:44.696734    6300 command_runner.go:130] >         health {
	I0722 01:30:44.696734    6300 command_runner.go:130] >            lameduck 5s
	I0722 01:30:44.696734    6300 command_runner.go:130] >         }
	I0722 01:30:44.696734    6300 command_runner.go:130] >         ready
	I0722 01:30:44.696734    6300 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0722 01:30:44.696734    6300 command_runner.go:130] >            pods insecure
	I0722 01:30:44.696734    6300 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0722 01:30:44.696734    6300 command_runner.go:130] >            ttl 30
	I0722 01:30:44.696734    6300 command_runner.go:130] >         }
	I0722 01:30:44.696734    6300 command_runner.go:130] >         prometheus :9153
	I0722 01:30:44.696734    6300 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0722 01:30:44.696734    6300 command_runner.go:130] >            max_concurrent 1000
	I0722 01:30:44.696734    6300 command_runner.go:130] >         }
	I0722 01:30:44.696734    6300 command_runner.go:130] >         cache 30
	I0722 01:30:44.696734    6300 command_runner.go:130] >         loop
	I0722 01:30:44.696734    6300 command_runner.go:130] >         reload
	I0722 01:30:44.696734    6300 command_runner.go:130] >         loadbalance
	I0722 01:30:44.696734    6300 command_runner.go:130] >     }
	I0722 01:30:44.696734    6300 command_runner.go:130] > kind: ConfigMap
	I0722 01:30:44.696734    6300 command_runner.go:130] > metadata:
	I0722 01:30:44.696734    6300 command_runner.go:130] >   creationTimestamp: "2024-07-22T01:30:30Z"
	I0722 01:30:44.696734    6300 command_runner.go:130] >   name: coredns
	I0722 01:30:44.696734    6300 command_runner.go:130] >   namespace: kube-system
	I0722 01:30:44.696734    6300 command_runner.go:130] >   resourceVersion: "232"
	I0722 01:30:44.696734    6300 command_runner.go:130] >   uid: 86631e7e-db04-4a28-bb2d-5a66cc1ce105
	I0722 01:30:44.708343    6300 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0722 01:30:44.773069    6300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 01:30:45.314509    6300 command_runner.go:130] > configmap/coredns replaced
	I0722 01:30:45.314714    6300 start.go:971] {"host.minikube.internal": 172.28.192.1} host record injected into CoreDNS's ConfigMap
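The sed pipeline above inserts a "log" directive ahead of "errors" and a "hosts" stanza ahead of the "forward" block, so the replaced Corefile gains entries like the following (reconstructed from the sed expressions; not captured verbatim in this log):

    .:53 {
        log
        errors
        ...
        hosts {
           172.28.192.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        ...
    }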
	I0722 01:30:45.316794    6300 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 01:30:45.317712    6300 kapi.go:59] client config for multinode-227000: &rest.Config{Host:"https://172.28.193.96:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-227000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-227000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2085e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 01:30:45.318881    6300 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 01:30:45.319621    6300 kapi.go:59] client config for multinode-227000: &rest.Config{Host:"https://172.28.193.96:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-227000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-227000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2085e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 01:30:45.320205    6300 cert_rotation.go:137] Starting client certificate rotation controller
	I0722 01:30:45.320866    6300 node_ready.go:35] waiting up to 6m0s for node "multinode-227000" to be "Ready" ...
	I0722 01:30:45.320866    6300 round_trippers.go:463] GET https://172.28.193.96:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0722 01:30:45.320866    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:45.320866    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:45.320866    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:45.320866    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:45.321546    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:45.321546    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:45.321546    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:45.347396    6300 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0722 01:30:45.347712    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:45.347712    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:45.347712    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:45 GMT
	I0722 01:30:45.347712    6300 round_trippers.go:580]     Audit-Id: 03d877e5-b3ac-4711-894f-1a989f1ca1ff
	I0722 01:30:45.347712    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:45.347712    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:45.347712    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:45.347712    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:45.356866    6300 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I0722 01:30:45.356866    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:45.356866    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:45.357643    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:45.357643    6300 round_trippers.go:580]     Content-Length: 291
	I0722 01:30:45.357643    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:45 GMT
	I0722 01:30:45.357643    6300 round_trippers.go:580]     Audit-Id: 6ca24fbb-7d87-4e60-8906-244ea50138ec
	I0722 01:30:45.357643    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:45.357643    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:45.357643    6300 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"09bb0b5f-fd60-4724-90fa-4b153aed28c3","resourceVersion":"358","creationTimestamp":"2024-07-22T01:30:30Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0722 01:30:45.358288    6300 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"09bb0b5f-fd60-4724-90fa-4b153aed28c3","resourceVersion":"358","creationTimestamp":"2024-07-22T01:30:30Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0722 01:30:45.358526    6300 round_trippers.go:463] PUT https://172.28.193.96:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0722 01:30:45.358526    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:45.358526    6300 round_trippers.go:473]     Content-Type: application/json
	I0722 01:30:45.358526    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:45.358526    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:45.377820    6300 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0722 01:30:45.377901    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:45.377901    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:45 GMT
	I0722 01:30:45.377901    6300 round_trippers.go:580]     Audit-Id: 6aeb7ecd-64de-4f87-b0d6-9c9e6f7be00a
	I0722 01:30:45.377901    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:45.377990    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:45.377990    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:45.377990    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:45.377990    6300 round_trippers.go:580]     Content-Length: 291
	I0722 01:30:45.377990    6300 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"09bb0b5f-fd60-4724-90fa-4b153aed28c3","resourceVersion":"361","creationTimestamp":"2024-07-22T01:30:30Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
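The GET/PUT pair above drives the coredns Deployment through the autoscaling/v1 Scale subresource, dropping spec.replicas from 2 to 1. An equivalent client-go sketch, reusing the cs clientset and imports from the earlier snippet (illustrative only, not minikube's actual implementation):

func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
	// GET .../apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1 // the PUT body above changes spec.replicas from 2 to 1
	// PUT .../apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}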
	I0722 01:30:45.827516    6300 round_trippers.go:463] GET https://172.28.193.96:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0722 01:30:45.827632    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:45.827516    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:45.827632    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:45.827632    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:45.827795    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:45.827795    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:45.828000    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:45.840991    6300 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0722 01:30:45.840991    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:45.840991    6300 round_trippers.go:580]     Audit-Id: 8be442d6-d182-4f98-8f0a-8e086fe1afdf
	I0722 01:30:45.840991    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:45.840991    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:45.840991    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:45.840991    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:45.840991    6300 round_trippers.go:580]     Content-Length: 291
	I0722 01:30:45.840991    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:45 GMT
	I0722 01:30:45.840991    6300 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0722 01:30:45.840991    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:45.840991    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:45.840991    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:45 GMT
	I0722 01:30:45.840991    6300 round_trippers.go:580]     Audit-Id: bbd014cb-ed05-45d5-a86d-7d698d42eafa
	I0722 01:30:45.840991    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:45.840991    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:45.840991    6300 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"09bb0b5f-fd60-4724-90fa-4b153aed28c3","resourceVersion":"372","creationTimestamp":"2024-07-22T01:30:30Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0722 01:30:45.840991    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:45.840991    6300 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-227000" context rescaled to 1 replicas
	I0722 01:30:45.841986    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:46.323790    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:46.323867    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:46.323867    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:46.323867    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:46.324971    6300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 01:30:46.324971    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:46.324971    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:46.324971    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:46.324971    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:46.324971    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:46 GMT
	I0722 01:30:46.324971    6300 round_trippers.go:580]     Audit-Id: 031c9b3c-9009-4d4e-bc96-968acfdad5b4
	I0722 01:30:46.324971    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:46.324971    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:46.819305    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:30:46.820223    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:30:46.821360    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:46.821460    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:46.821460    6300 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 01:30:46.821460    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:46.821460    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:46.821817    6300 kapi.go:59] client config for multinode-227000: &rest.Config{Host:"https://172.28.193.96:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-227000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-227000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2085e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 01:30:46.823278    6300 addons.go:234] Setting addon default-storageclass=true in "multinode-227000"
	I0722 01:30:46.823386    6300 host.go:66] Checking if "multinode-227000" exists ...
	I0722 01:30:46.824744    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:30:46.828929    6300 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 01:30:46.828929    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:46.828929    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:30:46.828929    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:30:46.828929    6300 round_trippers.go:580]     Audit-Id: 49a016a8-782f-46c6-b731-e49ff9acbdf3
	I0722 01:30:46.829251    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:46.829294    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:46.829294    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:46.829294    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:46.829294    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:46 GMT
	I0722 01:30:46.829294    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:46.833846    6300 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 01:30:46.837838    6300 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 01:30:46.838862    6300 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 01:30:46.838862    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:30:47.326378    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:47.326378    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:47.326378    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:47.326378    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:47.331383    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:30:47.331580    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:47.331580    6300 round_trippers.go:580]     Audit-Id: 5a49dd35-a6f4-4b0a-8f50-9afdd59100c4
	I0722 01:30:47.331580    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:47.331580    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:47.331580    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:47.331724    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:47.331724    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:47 GMT
	I0722 01:30:47.332490    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:47.333255    6300 node_ready.go:53] node "multinode-227000" has status "Ready":"False"
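Each of the repeated GET /api/v1/nodes/multinode-227000 requests re-reads the Node object until its Ready condition turns True; the line above logs the still-False status. A hedged sketch of that wait in client-go, again reusing cs and the imports from the first snippet (corev1 is k8s.io/api/core/v1; names are illustrative):

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // no Ready condition reported yet
		})
}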
	I0722 01:30:47.834966    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:47.834966    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:47.834966    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:47.834966    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:47.839066    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:47.839066    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:47.839140    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:47.839140    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:47.839140    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:47.839140    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:47 GMT
	I0722 01:30:47.839140    6300 round_trippers.go:580]     Audit-Id: 38d9bdfb-974a-4ec1-929a-b8adfebc6716
	I0722 01:30:47.839140    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:47.840092    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:48.329022    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:48.329022    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:48.329022    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:48.329022    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:48.333685    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:30:48.333685    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:48.333685    6300 round_trippers.go:580]     Audit-Id: b3e85da6-b2d9-4863-be27-2018bb5eea77
	I0722 01:30:48.333685    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:48.333685    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:48.333685    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:48.333685    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:48.334701    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:48 GMT
	I0722 01:30:48.335219    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:48.835769    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:48.835769    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:48.835769    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:48.835769    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:48.840808    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:30:48.840808    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:48.840808    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:48.840808    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:48.840808    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:48 GMT
	I0722 01:30:48.840808    6300 round_trippers.go:580]     Audit-Id: 3092378c-9dff-4ef7-ac95-ce009d182b6d
	I0722 01:30:48.840808    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:48.840808    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:48.841788    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:49.324983    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:49.324983    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:49.325294    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:49.325294    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:49.328910    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:49.329030    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:49.329030    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:49.329030    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:49.329030    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:49 GMT
	I0722 01:30:49.329030    6300 round_trippers.go:580]     Audit-Id: fb43ed90-6c55-4bdc-8719-d04f7b1b653a
	I0722 01:30:49.329030    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:49.329030    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:49.329358    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:49.381319    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:30:49.382311    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:30:49.382311    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:30:49.445008    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:30:49.445586    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:30:49.445586    6300 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 01:30:49.445586    6300 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 01:30:49.445714    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
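	(The [executing ==>] lines above show the Hyper-V driver's pattern: shell out to powershell.exe with -NoProfile -NonInteractive and capture stdout/stderr. A minimal Go sketch of that pattern — the command line mirrors the log, but the surrounding program is hypothetical, not minikube's actual driver code:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Invoke PowerShell exactly as the [executing ==>] line does.
		cmd := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive",
			`( Hyper-V\Get-VM multinode-227000 ).state`,
		)
		out, err := cmd.Output() // stdout only; on failure err is an *exec.ExitError carrying stderr
		fmt.Printf("[stdout =====>] : %s\n", out)
		if err != nil {
			fmt.Printf("[stderr =====>] : %v\n", err)
		}
	}
	)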
	I0722 01:30:49.833285    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:49.833366    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:49.833366    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:49.833366    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:49.838794    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:30:49.839111    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:49.839111    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:49.839208    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:49.839208    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:49.839208    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:49.839208    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:49 GMT
	I0722 01:30:49.839302    6300 round_trippers.go:580]     Audit-Id: 4e879825-f4a0-4bfb-8493-b2f1dfc7be97
	I0722 01:30:49.839668    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:49.840267    6300 node_ready.go:53] node "multinode-227000" has status "Ready":"False"
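	(Each GET above, spaced roughly 500 ms apart, is one iteration of the node-readiness poll: fetch the Node, inspect its Ready condition, retry until it is True. A hedged client-go sketch of that loop — an illustration of the pattern, not minikube's actual node_ready.go code; cs is an assumed, already-configured clientset:
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitNodeReady polls the node's Ready condition about every 500ms,
	// matching the request cadence in the log above.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err // surface API errors instead of retrying blindly
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // no Ready condition reported yet
			})
	}
	)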
	I0722 01:30:50.323862    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:50.323862    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:50.323862    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:50.323862    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:50.327747    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:50.328661    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:50.328661    6300 round_trippers.go:580]     Audit-Id: c762267c-88b5-4f9f-81ad-90d25854948b
	I0722 01:30:50.328661    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:50.328661    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:50.328661    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:50.328661    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:50.328661    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:50 GMT
	I0722 01:30:50.328661    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:50.830790    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:50.830881    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:50.830881    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:50.830881    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:50.835472    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:30:50.835472    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:50.835472    6300 round_trippers.go:580]     Audit-Id: 4e82a629-d447-429a-bab8-643fcd576db2
	I0722 01:30:50.835472    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:50.835472    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:50.835472    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:50.835472    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:50.835472    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:50 GMT
	I0722 01:30:50.835472    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:51.324894    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:51.324894    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:51.324987    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:51.324987    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:51.328944    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:51.328944    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:51.329016    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:51.329016    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:51.329016    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:51.329016    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:51 GMT
	I0722 01:30:51.329016    6300 round_trippers.go:580]     Audit-Id: c9f128f7-baa2-4bce-91e1-5cb2e579c261
	I0722 01:30:51.329016    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:51.329304    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:51.832363    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:51.832363    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:51.832363    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:51.832363    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:51.838108    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:30:51.838180    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:51.838209    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:51.838209    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:51.838209    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:51 GMT
	I0722 01:30:51.838209    6300 round_trippers.go:580]     Audit-Id: a463236d-9cd4-4449-bc19-08fe2937ba32
	I0722 01:30:51.838209    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:51.838209    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:51.838545    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:51.910783    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:30:51.910973    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:30:51.911057    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:30:52.287676    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:30:52.288126    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:30:52.288193    6300 sshutil.go:53] new ssh client: &{IP:172.28.193.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
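	(sshutil.go:53 builds an SSH client from the VM's IP, port 22, the per-machine id_rsa key, and the docker user, all visible in the log line above. A rough equivalent with golang.org/x/crypto/ssh — a sketch, not minikube's sshutil implementation:
	
	import (
		"log"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	// Key-authenticated client; InsecureIgnoreHostKey is acceptable only
	// because this targets a throwaway test VM.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "172.28.193.96:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	)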
	I0722 01:30:52.321338    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:52.321338    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:52.321338    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:52.321338    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:52.324926    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:52.325257    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:52.325257    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:52.325257    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:52 GMT
	I0722 01:30:52.325257    6300 round_trippers.go:580]     Audit-Id: 6223026f-87b4-4e6b-a560-78f2a6872b46
	I0722 01:30:52.325257    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:52.325257    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:52.325257    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:52.325520    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:52.325952    6300 node_ready.go:53] node "multinode-227000" has status "Ready":"False"
	I0722 01:30:52.442322    6300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 01:30:52.832306    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:52.832306    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:52.832306    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:52.832306    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:52.837355    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:30:52.837700    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:52.837700    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:52.837700    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:52.837700    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:52.837700    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:52 GMT
	I0722 01:30:52.837830    6300 round_trippers.go:580]     Audit-Id: 06a9614e-90dc-4dff-ba4d-e4fb76036fb8
	I0722 01:30:52.837830    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:52.838178    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:53.056309    6300 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0722 01:30:53.056309    6300 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0722 01:30:53.056309    6300 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0722 01:30:53.056409    6300 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0722 01:30:53.056409    6300 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0722 01:30:53.056442    6300 command_runner.go:130] > pod/storage-provisioner created
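	(The six command_runner lines are kubectl's stdout from applying storage-provisioner.yaml inside the guest, launched by the ssh_runner Run at 01:30:52.442 above. Continuing the previous SSH sketch, running that remote command and relaying its output could look like this — a sketch assuming the client from the earlier block:
	
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	// Same command string as the ssh_runner.go:195 line above.
	out, err := session.CombinedOutput(
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Printf("%s", out) // e.g. "serviceaccount/storage-provisioner created" ...
	if err != nil {
		log.Fatal(err)
	}
	)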
	I0722 01:30:53.323759    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:53.323759    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:53.323759    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:53.323759    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:53.327340    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:53.327340    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:53.327340    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:53.327340    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:53 GMT
	I0722 01:30:53.327340    6300 round_trippers.go:580]     Audit-Id: c40ee36f-ef76-4900-9838-6b927c8251be
	I0722 01:30:53.327340    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:53.327340    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:53.328349    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:53.328424    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:53.830824    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:53.830824    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:53.831065    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:53.831065    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:53.838457    6300 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 01:30:53.839507    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:53.839507    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:53.839546    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:53.839546    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:53.839546    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:53.839546    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:53 GMT
	I0722 01:30:53.839546    6300 round_trippers.go:580]     Audit-Id: e4b8da62-52aa-443c-87f6-8c8f6216fd99
	I0722 01:30:53.839790    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:54.322689    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:54.322928    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:54.322928    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:54.322928    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:54.326352    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:54.326352    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:54.326352    6300 round_trippers.go:580]     Audit-Id: e4f6c3c9-d83c-4292-b16d-e7d35cbf72d4
	I0722 01:30:54.326352    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:54.326352    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:54.326352    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:54.326352    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:54.326352    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:54 GMT
	I0722 01:30:54.326864    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:54.327663    6300 node_ready.go:53] node "multinode-227000" has status "Ready":"False"
	I0722 01:30:54.680105    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:30:54.680632    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:30:54.680784    6300 sshutil.go:53] new ssh client: &{IP:172.28.193.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:30:54.812051    6300 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 01:30:54.822992    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:54.822992    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:54.822992    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:54.822992    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:54.827576    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:30:54.827576    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:54.827576    6300 round_trippers.go:580]     Audit-Id: 43b23042-332a-4dac-afce-81e03cadf597
	I0722 01:30:54.827576    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:54.827576    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:54.827576    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:54.827576    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:54.827576    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:54 GMT
	I0722 01:30:54.827576    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:55.007790    6300 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0722 01:30:55.009055    6300 round_trippers.go:463] GET https://172.28.193.96:8443/apis/storage.k8s.io/v1/storageclasses
	I0722 01:30:55.009153    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:55.009153    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:55.009153    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:55.012469    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:55.012610    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:55.012610    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:55 GMT
	I0722 01:30:55.012610    6300 round_trippers.go:580]     Audit-Id: c1941734-2897-40b4-a1d0-84544036c906
	I0722 01:30:55.012610    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:55.012610    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:55.012654    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:55.012654    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:55.012654    6300 round_trippers.go:580]     Content-Length: 1273
	I0722 01:30:55.012738    6300 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"398"},"items":[{"metadata":{"name":"standard","uid":"2d0f9618-47dc-474c-bd90-417e58748690","resourceVersion":"398","creationTimestamp":"2024-07-22T01:30:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-22T01:30:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0722 01:30:55.013531    6300 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2d0f9618-47dc-474c-bd90-417e58748690","resourceVersion":"398","creationTimestamp":"2024-07-22T01:30:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-22T01:30:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0722 01:30:55.013589    6300 round_trippers.go:463] PUT https://172.28.193.96:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0722 01:30:55.013714    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:55.013714    6300 round_trippers.go:473]     Content-Type: application/json
	I0722 01:30:55.013714    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:55.013714    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:55.017040    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:55.017040    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:55.017040    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:55.017040    6300 round_trippers.go:580]     Content-Length: 1220
	I0722 01:30:55.017040    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:55 GMT
	I0722 01:30:55.017040    6300 round_trippers.go:580]     Audit-Id: c91d77c3-fac3-4d0e-bc1f-5a3d5f51924e
	I0722 01:30:55.017040    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:55.017040    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:55.017040    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:55.017447    6300 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"2d0f9618-47dc-474c-bd90-417e58748690","resourceVersion":"398","creationTimestamp":"2024-07-22T01:30:55Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-22T01:30:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
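	(The GET-then-PUT pair above is the default-storageclass addon's read-modify-write: fetch the standard StorageClass, ensure the is-default-class annotation is "true", and write it back at the fetched resourceVersion. A hedged client-go equivalent, reusing the cs/ctx assumptions from the readiness sketch:
	
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	// Update issues a PUT carrying the fetched resourceVersion (398 in the
	// log), so a concurrent writer would surface as a 409 Conflict.
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	)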
	I0722 01:30:55.022719    6300 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0722 01:30:55.027213    6300 addons.go:510] duration metric: took 10.723841s for enable addons: enabled=[storage-provisioner default-storageclass]
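	(The addons.go:510 duration metric is plain wall-clock time across addon enablement; a minimal, hypothetical illustration of that measurement:
	
	start := time.Now()
	// ... enable storage-provisioner and default-storageclass ...
	log.Printf("duration metric: took %s for enable addons: enabled=%v",
		time.Since(start), []string{"storage-provisioner", "default-storageclass"})
	)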
	I0722 01:30:55.328547    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:55.328547    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:55.328547    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:55.328547    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:55.333114    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:55.333197    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:55.333197    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:55.333197    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:55 GMT
	I0722 01:30:55.333197    6300 round_trippers.go:580]     Audit-Id: 24a291c6-82c9-4aad-b8d2-bd1fe4e0dae8
	I0722 01:30:55.333197    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:55.333197    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:55.333197    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:55.334343    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:55.831586    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:55.831586    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:55.831586    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:55.831586    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:55.836152    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:30:55.836152    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:55.836739    6300 round_trippers.go:580]     Audit-Id: 4703b036-1c9f-455d-8581-2624ae21edfb
	I0722 01:30:55.836739    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:55.836739    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:55.836739    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:55.836739    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:55.836739    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:55 GMT
	I0722 01:30:55.837229    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:56.332743    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:56.332863    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:56.332863    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:56.332863    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:56.336437    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:56.336437    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:56.337149    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:56.337149    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:56.337149    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:56.337149    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:56.337149    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:56 GMT
	I0722 01:30:56.337149    6300 round_trippers.go:580]     Audit-Id: 18d71d76-175a-4633-baac-80f73bb1193d
	I0722 01:30:56.337612    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:56.337907    6300 node_ready.go:53] node "multinode-227000" has status "Ready":"False"
	I0722 01:30:56.833427    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:56.833427    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:56.833427    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:56.833427    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:56.837659    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:30:56.837659    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:56.837659    6300 round_trippers.go:580]     Audit-Id: 0a92a756-ed26-4482-9173-6a58652e0223
	I0722 01:30:56.837659    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:56.837659    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:56.837659    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:56.837659    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:56.837659    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:56 GMT
	I0722 01:30:56.838638    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:57.331019    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:57.331070    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:57.331070    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:57.331070    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:57.338628    6300 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 01:30:57.338628    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:57.338628    6300 round_trippers.go:580]     Audit-Id: 30c53a41-fd11-4c6a-beb2-5980df281b18
	I0722 01:30:57.339175    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:57.339175    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:57.339175    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:57.339175    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:57.339175    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:57 GMT
	I0722 01:30:57.339393    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:57.829485    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:57.829485    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:57.829485    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:57.829485    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:57.833312    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:57.834416    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:57.834416    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:57.834416    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:57.834416    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:57.834416    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:57 GMT
	I0722 01:30:57.834416    6300 round_trippers.go:580]     Audit-Id: 61a2f42a-b69d-4b36-b976-3a09d0638468
	I0722 01:30:57.834416    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:57.834769    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:58.328748    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:58.328748    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:58.328748    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:58.328748    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:58.332391    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:58.332391    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:58.332391    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:58.332391    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:58.332391    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:58.332391    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:58.332391    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:58 GMT
	I0722 01:30:58.333373    6300 round_trippers.go:580]     Audit-Id: d0be9c4e-b926-4e95-b4ef-28cc75e70e52
	I0722 01:30:58.333992    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:58.830595    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:58.830595    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:58.830595    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:58.830595    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:58.834179    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:30:58.834179    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:58.834179    6300 round_trippers.go:580]     Audit-Id: 307d6e20-3c00-4db3-87aa-cbeb399e4e67
	I0722 01:30:58.834179    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:58.834179    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:58.834179    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:58.834179    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:58.834832    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:58 GMT
	I0722 01:30:58.835264    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:58.835795    6300 node_ready.go:53] node "multinode-227000" has status "Ready":"False"
	I0722 01:30:59.330609    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:59.330609    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:59.330609    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:59.330609    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:59.335219    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:30:59.335219    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:59.335664    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:59.335664    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:59.335664    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:59 GMT
	I0722 01:30:59.335664    6300 round_trippers.go:580]     Audit-Id: a25389f3-f804-4c8d-91de-d1d17e812a6c
	I0722 01:30:59.335664    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:59.335664    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:59.335763    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:30:59.831696    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:30:59.831696    6300 round_trippers.go:469] Request Headers:
	I0722 01:30:59.831790    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:30:59.831790    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:30:59.836121    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:30:59.836121    6300 round_trippers.go:577] Response Headers:
	I0722 01:30:59.836121    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:30:59.836121    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:30:59 GMT
	I0722 01:30:59.836304    6300 round_trippers.go:580]     Audit-Id: 0bc9f266-9d6f-4bad-b7d0-35ab9b0f9a6a
	I0722 01:30:59.836304    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:30:59.836304    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:30:59.836304    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:30:59.836526    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:31:00.331217    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:00.331339    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:00.331376    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:00.331376    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:00.334987    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:00.334987    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:00.334987    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:00.334987    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:00 GMT
	I0722 01:31:00.335868    6300 round_trippers.go:580]     Audit-Id: 6a1a3e76-7c84-4457-87b4-29e998608bde
	I0722 01:31:00.335868    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:00.335868    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:00.335868    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:00.336681    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:31:00.829846    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:00.829846    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:00.829846    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:00.829970    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:00.835026    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:31:00.835071    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:00.835071    6300 round_trippers.go:580]     Audit-Id: ab70f904-82ae-4438-a6f2-0dcd4b2710be
	I0722 01:31:00.835071    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:00.835071    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:00.835071    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:00.835071    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:00.835071    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:00 GMT
	I0722 01:31:00.835819    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:31:00.835948    6300 node_ready.go:53] node "multinode-227000" has status "Ready":"False"
	I0722 01:31:01.330000    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:01.330371    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:01.330371    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:01.330371    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:01.334036    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:01.335015    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:01.335015    6300 round_trippers.go:580]     Audit-Id: 8927c54a-8830-474c-b653-2cf12ff52a58
	I0722 01:31:01.335052    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:01.335052    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:01.335052    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:01.335052    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:01.335052    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:01 GMT
	I0722 01:31:01.335408    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"341","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0722 01:31:01.831130    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:01.831130    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:01.831130    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:01.831130    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:01.834770    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:01.835642    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:01.835642    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:01.835642    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:01.835642    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:01.835642    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:01.835642    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:01 GMT
	I0722 01:31:01.835642    6300 round_trippers.go:580]     Audit-Id: daaf0b68-95de-496b-b3e5-98fb86d62c21
	I0722 01:31:01.835642    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"402","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0722 01:31:02.331398    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:02.331398    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:02.331398    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:02.331398    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:02.335062    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:02.335062    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:02.335062    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:02.335062    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:02 GMT
	I0722 01:31:02.335062    6300 round_trippers.go:580]     Audit-Id: aa750365-0be8-4fcb-95ba-e059432cda4a
	I0722 01:31:02.335062    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:02.335062    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:02.335062    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:02.335062    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"402","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0722 01:31:02.829600    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:02.829600    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:02.829600    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:02.829600    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:02.834242    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:31:02.834242    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:02.834242    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:02.834242    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:02.834242    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:02 GMT
	I0722 01:31:02.834242    6300 round_trippers.go:580]     Audit-Id: a856da91-d0b9-4ea2-846e-64ea2e38f04d
	I0722 01:31:02.834242    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:02.834242    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:02.835063    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"402","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0722 01:31:03.330151    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:03.330347    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:03.330415    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:03.330415    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:03.334374    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:03.334374    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:03.334374    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:03.334374    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:03.334374    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:03.334665    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:03.334665    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:03 GMT
	I0722 01:31:03.334665    6300 round_trippers.go:580]     Audit-Id: 1a8462ef-cdc8-4fd3-a64b-4395a49cdf34
	I0722 01:31:03.335968    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"402","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0722 01:31:03.336361    6300 node_ready.go:53] node "multinode-227000" has status "Ready":"False"
	I0722 01:31:03.830227    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:03.830372    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:03.830372    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:03.830372    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:03.837548    6300 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0722 01:31:03.837548    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:03.837548    6300 round_trippers.go:580]     Audit-Id: 1a07f2aa-8972-4adf-9885-33122f4dbdf4
	I0722 01:31:03.837548    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:03.837548    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:03.837548    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:03.837548    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:03.837548    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:03 GMT
	I0722 01:31:03.838294    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"402","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0722 01:31:04.327515    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:04.327606    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:04.327606    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:04.327606    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:04.331291    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:04.332424    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:04.332424    6300 round_trippers.go:580]     Audit-Id: bcf3d490-b240-4422-834a-e16153b71381
	I0722 01:31:04.332424    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:04.332424    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:04.332424    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:04.332424    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:04.332424    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:04 GMT
	I0722 01:31:04.332708    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"402","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0722 01:31:04.827565    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:04.827565    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:04.827565    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:04.827565    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:04.833673    6300 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 01:31:04.833673    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:04.833673    6300 round_trippers.go:580]     Audit-Id: 59baaf51-ae4a-4083-9e69-0c026756cc50
	I0722 01:31:04.833673    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:04.833673    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:04.833673    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:04.833673    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:04.833673    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:04 GMT
	I0722 01:31:04.834439    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"402","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0722 01:31:05.327877    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:05.327877    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:05.327877    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:05.327877    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:05.332736    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:31:05.332857    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:05.332857    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:05.332857    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:05.332857    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:05 GMT
	I0722 01:31:05.332857    6300 round_trippers.go:580]     Audit-Id: bc542344-de34-4277-8cd9-9c9bfe59ad2d
	I0722 01:31:05.332857    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:05.332857    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:05.333106    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"402","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0722 01:31:05.832879    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:05.832994    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:05.832994    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:05.832994    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:05.837683    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:31:05.837683    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:05.837683    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:05.837683    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:05.837683    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:05.837683    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:05.837683    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:05 GMT
	I0722 01:31:05.838214    6300 round_trippers.go:580]     Audit-Id: 8cf0f632-023f-415c-97cc-96c1a5cd8eb2
	I0722 01:31:05.838405    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:31:05.838941    6300 node_ready.go:49] node "multinode-227000" has status "Ready":"True"
	I0722 01:31:05.839040    6300 node_ready.go:38] duration metric: took 20.5179019s for node "multinode-227000" to be "Ready" ...
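	(Editor's note: the trace above is minikube's node-readiness poll — node_ready.go issues a GET against /api/v1/nodes/multinode-227000 roughly every 500ms and reports "Ready":"False" until the kubelet posts the Ready condition, 20.5s after the wait began. As a minimal sketch only — this is not minikube's actual helper; the kubeconfig path is a hypothetical placeholder, and the 6-minute timeout is an assumption since the log does not show the node-wait budget — the same loop in client-go looks like:)

	-- sketch (Go) --
	// Minimal client-go sketch of the readiness poll recorded above: GET the
	// node every 500ms and stop once its NodeReady condition reports True.
	// Hypothetical placeholders: the kubeconfig path and the 6m timeout.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// 500ms matches the request cadence in the log; the 6m budget is assumed.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "multinode-227000", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node ready:", err == nil)
	}
	-- /sketch --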
	I0722 01:31:05.839040    6300 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 01:31:05.839154    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods
	I0722 01:31:05.839154    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:05.839154    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:05.839237    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:05.847298    6300 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 01:31:05.847298    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:05.847298    6300 round_trippers.go:580]     Audit-Id: 41a63f89-d529-4b48-ad91-921524b4ef77
	I0722 01:31:05.847298    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:05.847298    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:05.847298    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:05.847298    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:05.847298    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:05 GMT
	I0722 01:31:05.849933    6300 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-6hq7s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fea9d464-87a0-47b2-bb1f-7de0dca9db23","resourceVersion":"410","creationTimestamp":"2024-07-22T01:30:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85db09f6-492e-448a-8c46-be3515d2a589","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85db09f6-492e-448a-8c46-be3515d2a589\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56337 chars]
	I0722 01:31:05.856265    6300 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6hq7s" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:05.856462    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6hq7s
	I0722 01:31:05.856462    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:05.856462    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:05.856661    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:05.860617    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:05.860692    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:05.860692    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:05.860692    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:05.860692    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:05.860692    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:05 GMT
	I0722 01:31:05.860692    6300 round_trippers.go:580]     Audit-Id: bc18010f-a594-4afb-a763-c9b795b10ddb
	I0722 01:31:05.860692    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:05.860692    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6hq7s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fea9d464-87a0-47b2-bb1f-7de0dca9db23","resourceVersion":"410","creationTimestamp":"2024-07-22T01:30:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85db09f6-492e-448a-8c46-be3515d2a589","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85db09f6-492e-448a-8c46-be3515d2a589\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0722 01:31:05.863012    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:05.863012    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:05.863012    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:05.863012    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:05.869058    6300 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 01:31:05.869156    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:05.869156    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:05.869156    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:05.869204    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:05.869204    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:05 GMT
	I0722 01:31:05.869204    6300 round_trippers.go:580]     Audit-Id: 54226402-b5ad-4312-9a3a-b8090ea42ec7
	I0722 01:31:05.869204    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:05.869411    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:31:06.369477    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6hq7s
	I0722 01:31:06.369477    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:06.369560    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:06.369560    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:06.373007    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:06.373007    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:06.373007    6300 round_trippers.go:580]     Audit-Id: 7fe97e3f-eaa7-401d-bb85-486b247ac8e8
	I0722 01:31:06.373007    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:06.373007    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:06.373007    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:06.373007    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:06.373007    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:06 GMT
	I0722 01:31:06.373007    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6hq7s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fea9d464-87a0-47b2-bb1f-7de0dca9db23","resourceVersion":"410","creationTimestamp":"2024-07-22T01:30:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85db09f6-492e-448a-8c46-be3515d2a589","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85db09f6-492e-448a-8c46-be3515d2a589\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0722 01:31:06.374013    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:06.374013    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:06.374013    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:06.374013    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:06.376997    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:31:06.376997    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:06.377322    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:06.377322    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:06.377322    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:06 GMT
	I0722 01:31:06.377322    6300 round_trippers.go:580]     Audit-Id: bd4b2c28-91c0-40db-9285-bcd8cd2413cb
	I0722 01:31:06.377322    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:06.377322    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:06.377691    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:31:06.867944    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6hq7s
	I0722 01:31:06.867944    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:06.868300    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:06.868300    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:06.871236    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:31:06.871701    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:06.871701    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:06.871701    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:06.871701    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:06.871701    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:06 GMT
	I0722 01:31:06.871701    6300 round_trippers.go:580]     Audit-Id: c26fc463-8122-417a-b8c7-2fd42648005c
	I0722 01:31:06.871701    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:06.871919    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6hq7s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fea9d464-87a0-47b2-bb1f-7de0dca9db23","resourceVersion":"410","creationTimestamp":"2024-07-22T01:30:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85db09f6-492e-448a-8c46-be3515d2a589","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85db09f6-492e-448a-8c46-be3515d2a589\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0722 01:31:06.872082    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:06.872082    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:06.872082    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:06.872082    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:06.874744    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:31:06.874744    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:06.874744    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:06.874744    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:06 GMT
	I0722 01:31:06.874744    6300 round_trippers.go:580]     Audit-Id: f40876f1-6d3e-4d2c-a1f1-27b204d789c6
	I0722 01:31:06.874744    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:06.874744    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:06.874744    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:06.875763    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:31:07.371252    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6hq7s
	I0722 01:31:07.371252    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.371252    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.371252    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.375944    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:31:07.376527    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.376527    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.376527    6300 round_trippers.go:580]     Audit-Id: 536ba4e1-94a4-47ca-84a4-fad5e89f8f8d
	I0722 01:31:07.376527    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.376527    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.376527    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.376527    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.376749    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6hq7s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fea9d464-87a0-47b2-bb1f-7de0dca9db23","resourceVersion":"410","creationTimestamp":"2024-07-22T01:30:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85db09f6-492e-448a-8c46-be3515d2a589","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85db09f6-492e-448a-8c46-be3515d2a589\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0722 01:31:07.377547    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:07.377547    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.377601    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.377601    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.387691    6300 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0722 01:31:07.387691    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.387691    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.387691    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.387691    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.387691    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.387691    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.387691    6300 round_trippers.go:580]     Audit-Id: d7e2401b-7ba2-44dd-b893-3547989e8874
	I0722 01:31:07.387691    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:31:07.858258    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6hq7s
	I0722 01:31:07.858258    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.858258    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.858343    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.862619    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:31:07.862764    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.862764    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.862764    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.862764    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.862764    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.862764    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.862764    6300 round_trippers.go:580]     Audit-Id: 6fcbf84c-04f8-41bf-910c-a8f64cc15b8d
	I0722 01:31:07.862988    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6hq7s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fea9d464-87a0-47b2-bb1f-7de0dca9db23","resourceVersion":"424","creationTimestamp":"2024-07-22T01:30:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85db09f6-492e-448a-8c46-be3515d2a589","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85db09f6-492e-448a-8c46-be3515d2a589\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0722 01:31:07.863792    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:07.863877    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.863877    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.863877    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.866148    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:31:07.866148    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.866799    6300 round_trippers.go:580]     Audit-Id: 86d87923-07bd-4c96-aa5e-2ed4d5a669c8
	I0722 01:31:07.866799    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.866799    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.866799    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.866799    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.866799    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.867016    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:31:07.867071    6300 pod_ready.go:92] pod "coredns-7db6d8ff4d-6hq7s" in "kube-system" namespace has status "Ready":"True"
	I0722 01:31:07.867071    6300 pod_ready.go:81] duration metric: took 2.0107513s for pod "coredns-7db6d8ff4d-6hq7s" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:07.867071    6300 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:07.867071    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-227000
	I0722 01:31:07.867071    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.867624    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.867624    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.869885    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:31:07.869885    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.869885    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.869885    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.869885    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.869885    6300 round_trippers.go:580]     Audit-Id: 8a4db4ef-3d1d-40b1-a254-a1e6921ec525
	I0722 01:31:07.869885    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.869885    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.870875    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-227000","namespace":"kube-system","uid":"c19bde05-9ea4-4a67-9b99-6165c66ade33","resourceVersion":"382","creationTimestamp":"2024-07-22T01:30:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.193.96:2379","kubernetes.io/config.hash":"635a26be20dd0b8ec8da52b5b98a4659","kubernetes.io/config.mirror":"635a26be20dd0b8ec8da52b5b98a4659","kubernetes.io/config.seen":"2024-07-22T01:30:30.619089190Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0722 01:31:07.872075    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:07.872075    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.872075    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.872075    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.875862    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:07.875862    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.875862    6300 round_trippers.go:580]     Audit-Id: 439b8c3f-c6c1-487d-9f05-eaeab6b67b2f
	I0722 01:31:07.875862    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.875862    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.875862    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.875862    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.875862    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.876583    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:31:07.877182    6300 pod_ready.go:92] pod "etcd-multinode-227000" in "kube-system" namespace has status "Ready":"True"
	I0722 01:31:07.877182    6300 pod_ready.go:81] duration metric: took 10.1109ms for pod "etcd-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:07.877182    6300 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:07.877182    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-227000
	I0722 01:31:07.877182    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.877182    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.877182    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.880030    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:31:07.880030    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.880030    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.880030    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.880030    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.880365    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.880365    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.880392    6300 round_trippers.go:580]     Audit-Id: b9e405ce-8ae2-40a3-a866-d2305b2c6b7f
	I0722 01:31:07.881118    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-227000","namespace":"kube-system","uid":"df64a865-3955-4a82-992b-eef0e36422ab","resourceVersion":"383","creationTimestamp":"2024-07-22T01:30:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.193.96:8443","kubernetes.io/config.hash":"1d87c68d3e1d509e27a9fa5e92fff918","kubernetes.io/config.mirror":"1d87c68d3e1d509e27a9fa5e92fff918","kubernetes.io/config.seen":"2024-07-22T01:30:22.625857454Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0722 01:31:07.881788    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:07.881788    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.881788    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.881788    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.885165    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:07.885165    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.885165    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.885165    6300 round_trippers.go:580]     Audit-Id: faf57de6-65d1-499e-a8ea-79a5a71b658b
	I0722 01:31:07.885165    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.885165    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.885165    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.885165    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.885839    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:31:07.886249    6300 pod_ready.go:92] pod "kube-apiserver-multinode-227000" in "kube-system" namespace has status "Ready":"True"
	I0722 01:31:07.886328    6300 pod_ready.go:81] duration metric: took 9.1455ms for pod "kube-apiserver-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:07.886328    6300 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:07.886421    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-227000
	I0722 01:31:07.886494    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.886494    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.886494    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.889759    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:07.889759    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.889759    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.889759    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.889914    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.889914    6300 round_trippers.go:580]     Audit-Id: 4d6c8cc9-ce71-4f95-ba4f-5d96d718a46a
	I0722 01:31:07.889914    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.889914    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.890188    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-227000","namespace":"kube-system","uid":"aba6daf9-450a-44c2-9608-9f6b86f64b3b","resourceVersion":"380","creationTimestamp":"2024-07-22T01:30:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5777eff8803f26ce696c053b191b7486","kubernetes.io/config.mirror":"5777eff8803f26ce696c053b191b7486","kubernetes.io/config.seen":"2024-07-22T01:30:22.625858454Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0722 01:31:07.890957    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:07.890957    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.890957    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.890957    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.893537    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:31:07.894542    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.894542    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.894542    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.894542    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.894542    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.894542    6300 round_trippers.go:580]     Audit-Id: 75feee15-cf4f-46df-9bc1-bf111ac4a80f
	I0722 01:31:07.894610    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.894765    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:31:07.895615    6300 pod_ready.go:92] pod "kube-controller-manager-multinode-227000" in "kube-system" namespace has status "Ready":"True"
	I0722 01:31:07.895615    6300 pod_ready.go:81] duration metric: took 9.2871ms for pod "kube-controller-manager-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:07.895615    6300 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xl6zz" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:07.895615    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xl6zz
	I0722 01:31:07.895615    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.895615    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.895615    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.901514    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:31:07.901589    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.901662    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.901662    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.901662    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.901662    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.901662    6300 round_trippers.go:580]     Audit-Id: 76484fcc-1d35-4172-81cf-0f9a925ad9d4
	I0722 01:31:07.901743    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.901743    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xl6zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"ea85e319-224a-4ceb-801e-47e309b123c2","resourceVersion":"375","creationTimestamp":"2024-07-22T01:30:43Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd58a26a-691f-4060-82de-7268a84fdfe8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd58a26a-691f-4060-82de-7268a84fdfe8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0722 01:31:07.902493    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:07.902493    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:07.902493    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:07.902493    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:07.904824    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:31:07.904824    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:07.904824    6300 round_trippers.go:580]     Audit-Id: 2924d3f5-69c3-4d99-a0c7-6eb37828456b
	I0722 01:31:07.904824    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:07.904824    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:07.904824    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:07.904824    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:07.904824    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:07 GMT
	I0722 01:31:07.904824    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:31:07.905856    6300 pod_ready.go:92] pod "kube-proxy-xl6zz" in "kube-system" namespace has status "Ready":"True"
	I0722 01:31:07.905856    6300 pod_ready.go:81] duration metric: took 10.2408ms for pod "kube-proxy-xl6zz" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:07.905856    6300 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:08.059824    6300 request.go:629] Waited for 153.7495ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-227000
	I0722 01:31:08.059943    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-227000
	I0722 01:31:08.059943    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:08.059943    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:08.059943    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:08.064393    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:31:08.065237    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:08.065237    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:08.065237    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:08.065237    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:08 GMT
	I0722 01:31:08.065237    6300 round_trippers.go:580]     Audit-Id: b266e3b3-ce94-407d-91f6-769ff6d2bfb3
	I0722 01:31:08.065237    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:08.065237    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:08.065414    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-227000","namespace":"kube-system","uid":"04abb215-da93-47b4-9876-a6f25ddb7041","resourceVersion":"381","creationTimestamp":"2024-07-22T01:30:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a40418204e092421fe09dcd13fc0d615","kubernetes.io/config.mirror":"a40418204e092421fe09dcd13fc0d615","kubernetes.io/config.seen":"2024-07-22T01:30:30.619088390Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0722 01:31:08.263164    6300 request.go:629] Waited for 196.5383ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:08.263285    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:31:08.263285    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:08.263285    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:08.263285    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:08.267436    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:31:08.268066    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:08.268066    6300 round_trippers.go:580]     Audit-Id: 750f478a-e68f-4783-ae23-59c3be38dd52
	I0722 01:31:08.268066    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:08.268066    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:08.268066    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:08.268066    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:08.268066    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:08 GMT
	I0722 01:31:08.268348    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:31:08.268348    6300 pod_ready.go:92] pod "kube-scheduler-multinode-227000" in "kube-system" namespace has status "Ready":"True"
	I0722 01:31:08.268348    6300 pod_ready.go:81] duration metric: took 362.4884ms for pod "kube-scheduler-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:31:08.268348    6300 pod_ready.go:38] duration metric: took 2.4292803s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
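	The pod_ready lines above are minikube's readiness gate: one GET per system pod, then a check that the pod's Ready condition is True, with per-pod durations logged afterwards. The following is a minimal client-go sketch of that loop; it is illustrative only, not minikube's actual code, and the helper name waitPodReady plus the assumption of an already-configured clientset are hypothetical.
	
	-- go sketch --
	package sketch
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitPodReady polls one pod until its Ready condition is True, the same
	// check the log records as: has status "Ready":"True".
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // retry until Ready or timeout
		}
		return fmt.Errorf("pod %s/%s was not Ready within %v", ns, name, timeout)
	}
	-- /go sketch --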
	I0722 01:31:08.268901    6300 api_server.go:52] waiting for apiserver process to appear ...
	I0722 01:31:08.282602    6300 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 01:31:08.318068    6300 command_runner.go:130] > 2109
	I0722 01:31:08.318167    6300 api_server.go:72] duration metric: took 24.0146414s to wait for apiserver process to appear ...
	I0722 01:31:08.318234    6300 api_server.go:88] waiting for apiserver healthz status ...
	I0722 01:31:08.318234    6300 api_server.go:253] Checking apiserver healthz at https://172.28.193.96:8443/healthz ...
	I0722 01:31:08.326028    6300 api_server.go:279] https://172.28.193.96:8443/healthz returned 200:
	ok
	I0722 01:31:08.326735    6300 round_trippers.go:463] GET https://172.28.193.96:8443/version
	I0722 01:31:08.326788    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:08.326788    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:08.326788    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:08.328585    6300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 01:31:08.328585    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:08.328585    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:08.328585    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:08.328585    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:08.329142    6300 round_trippers.go:580]     Content-Length: 263
	I0722 01:31:08.329142    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:08 GMT
	I0722 01:31:08.329142    6300 round_trippers.go:580]     Audit-Id: 39219ae2-e744-4f09-8622-5ca357fdc5a7
	I0722 01:31:08.329142    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:08.329142    6300 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0722 01:31:08.329250    6300 api_server.go:141] control plane version: v1.30.3
	I0722 01:31:08.329346    6300 api_server.go:131] duration metric: took 11.1127ms to wait for apiserver health ...
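	The two probes above (GET /healthz returning 200 "ok", then GET /version returning the JSON body shown) can be reproduced with client-go's discovery client. A self-contained sketch under that assumption, reading the kubeconfig path from the KUBECONFIG environment variable rather than hard-coding one:
	
	-- go sketch --
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"os"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
	
		// GET /healthz: the probe that returned "ok" above.
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(body)) // "ok"
	
		// GET /version: the JSON document shown above, decoded.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(v.GitVersion) // e.g. "v1.30.3"
	}
	-- /go sketch --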
	I0722 01:31:08.329346    6300 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 01:31:08.466779    6300 request.go:629] Waited for 137.1311ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods
	I0722 01:31:08.466779    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods
	I0722 01:31:08.466779    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:08.466779    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:08.467031    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:08.472879    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:31:08.472879    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:08.472879    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:08.473036    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:08.473036    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:08.473036    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:08 GMT
	I0722 01:31:08.473036    6300 round_trippers.go:580]     Audit-Id: 6e5a0d96-d0b4-4a7e-80de-88f5e9d24950
	I0722 01:31:08.473036    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:08.474402    6300 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-6hq7s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fea9d464-87a0-47b2-bb1f-7de0dca9db23","resourceVersion":"424","creationTimestamp":"2024-07-22T01:30:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85db09f6-492e-448a-8c46-be3515d2a589","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85db09f6-492e-448a-8c46-be3515d2a589\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0722 01:31:08.477695    6300 system_pods.go:59] 8 kube-system pods found
	I0722 01:31:08.477779    6300 system_pods.go:61] "coredns-7db6d8ff4d-6hq7s" [fea9d464-87a0-47b2-bb1f-7de0dca9db23] Running
	I0722 01:31:08.477779    6300 system_pods.go:61] "etcd-multinode-227000" [c19bde05-9ea4-4a67-9b99-6165c66ade33] Running
	I0722 01:31:08.477779    6300 system_pods.go:61] "kindnet-hw45n" [1dc37e94-95c4-41e3-98cf-12aaecf56a2d] Running
	I0722 01:31:08.477812    6300 system_pods.go:61] "kube-apiserver-multinode-227000" [df64a865-3955-4a82-992b-eef0e36422ab] Running
	I0722 01:31:08.477812    6300 system_pods.go:61] "kube-controller-manager-multinode-227000" [aba6daf9-450a-44c2-9608-9f6b86f64b3b] Running
	I0722 01:31:08.477812    6300 system_pods.go:61] "kube-proxy-xl6zz" [ea85e319-224a-4ceb-801e-47e309b123c2] Running
	I0722 01:31:08.477812    6300 system_pods.go:61] "kube-scheduler-multinode-227000" [04abb215-da93-47b4-9876-a6f25ddb7041] Running
	I0722 01:31:08.477812    6300 system_pods.go:61] "storage-provisioner" [b44c35fc-95d0-4d14-9976-12e41d442419] Running
	I0722 01:31:08.477812    6300 system_pods.go:74] duration metric: took 148.4638ms to wait for pod list to return data ...
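	The system_pods check above is a single List over the kube-system namespace followed by a per-pod report, exactly the "8 kube-system pods found" lines. A sketch of the same call (assuming the same hypothetical clientset as the earlier sketch):
	
	-- go sketch --
	package sketch
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// listSystemPods prints one line per kube-system pod, mirroring the
	// system_pods.go output format above.
	func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		}
		return nil
	}
	-- /go sketch --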
	I0722 01:31:08.477908    6300 default_sa.go:34] waiting for default service account to be created ...
	I0722 01:31:08.668738    6300 request.go:629] Waited for 190.4902ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/namespaces/default/serviceaccounts
	I0722 01:31:08.668883    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/default/serviceaccounts
	I0722 01:31:08.668883    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:08.668883    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:08.668883    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:08.672899    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:08.673100    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:08.673100    6300 round_trippers.go:580]     Content-Length: 261
	I0722 01:31:08.673100    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:08 GMT
	I0722 01:31:08.673100    6300 round_trippers.go:580]     Audit-Id: e380738f-4fc1-4ecb-8696-4752f3ee8c0c
	I0722 01:31:08.673100    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:08.673100    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:08.673100    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:08.673100    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:08.673196    6300 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"3774af83-a060-4151-97ca-2df980d15af3","resourceVersion":"337","creationTimestamp":"2024-07-22T01:30:44Z"}}]}
	I0722 01:31:08.673196    6300 default_sa.go:45] found service account: "default"
	I0722 01:31:08.673196    6300 default_sa.go:55] duration metric: took 195.2865ms for default service account to be created ...
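	The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, which with zero values defaults to roughly 5 QPS with a burst of 10. A client that wants fewer of these waits raises the limits on its rest.Config before building the clientset; the values below are illustrative, not minikube's settings:
	
	-- go sketch --
	package sketch
	
	import "k8s.io/client-go/rest"
	
	// raiseRateLimits loosens the client-side limiter that produces the
	// "Waited for ... due to client-side throttling" log lines above.
	// 50/100 are example values, not a recommendation.
	func raiseRateLimits(cfg *rest.Config) {
		cfg.QPS = 50
		cfg.Burst = 100
	}
	-- /go sketch --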
	I0722 01:31:08.673196    6300 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 01:31:08.870970    6300 request.go:629] Waited for 197.5683ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods
	I0722 01:31:08.871291    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods
	I0722 01:31:08.871291    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:08.871291    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:08.871291    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:08.877100    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:31:08.877100    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:08.877100    6300 round_trippers.go:580]     Audit-Id: 4921897e-421c-47a4-9816-6e8daf39dd99
	I0722 01:31:08.877100    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:08.877100    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:08.877286    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:08.877286    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:08.877286    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:08 GMT
	I0722 01:31:08.878662    6300 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-6hq7s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fea9d464-87a0-47b2-bb1f-7de0dca9db23","resourceVersion":"424","creationTimestamp":"2024-07-22T01:30:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85db09f6-492e-448a-8c46-be3515d2a589","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85db09f6-492e-448a-8c46-be3515d2a589\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0722 01:31:08.882932    6300 system_pods.go:86] 8 kube-system pods found
	I0722 01:31:08.883034    6300 system_pods.go:89] "coredns-7db6d8ff4d-6hq7s" [fea9d464-87a0-47b2-bb1f-7de0dca9db23] Running
	I0722 01:31:08.883034    6300 system_pods.go:89] "etcd-multinode-227000" [c19bde05-9ea4-4a67-9b99-6165c66ade33] Running
	I0722 01:31:08.883034    6300 system_pods.go:89] "kindnet-hw45n" [1dc37e94-95c4-41e3-98cf-12aaecf56a2d] Running
	I0722 01:31:08.883034    6300 system_pods.go:89] "kube-apiserver-multinode-227000" [df64a865-3955-4a82-992b-eef0e36422ab] Running
	I0722 01:31:08.883034    6300 system_pods.go:89] "kube-controller-manager-multinode-227000" [aba6daf9-450a-44c2-9608-9f6b86f64b3b] Running
	I0722 01:31:08.883200    6300 system_pods.go:89] "kube-proxy-xl6zz" [ea85e319-224a-4ceb-801e-47e309b123c2] Running
	I0722 01:31:08.883200    6300 system_pods.go:89] "kube-scheduler-multinode-227000" [04abb215-da93-47b4-9876-a6f25ddb7041] Running
	I0722 01:31:08.883200    6300 system_pods.go:89] "storage-provisioner" [b44c35fc-95d0-4d14-9976-12e41d442419] Running
	I0722 01:31:08.883200    6300 system_pods.go:126] duration metric: took 210.0016ms to wait for k8s-apps to be running ...
	I0722 01:31:08.883200    6300 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 01:31:08.894713    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 01:31:08.923365    6300 system_svc.go:56] duration metric: took 40.0245ms WaitForService to wait for kubelet
	I0722 01:31:08.923400    6300 kubeadm.go:582] duration metric: took 24.619868s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 01:31:08.923453    6300 node_conditions.go:102] verifying NodePressure condition ...
	I0722 01:31:09.072286    6300 request.go:629] Waited for 148.5537ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/nodes
	I0722 01:31:09.072393    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes
	I0722 01:31:09.072393    6300 round_trippers.go:469] Request Headers:
	I0722 01:31:09.072393    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:31:09.072393    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:31:09.075906    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:31:09.075906    6300 round_trippers.go:577] Response Headers:
	I0722 01:31:09.075906    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:31:09 GMT
	I0722 01:31:09.076636    6300 round_trippers.go:580]     Audit-Id: 4eac3f57-5a79-439f-a322-2d07b89446d4
	I0722 01:31:09.076636    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:31:09.076636    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:31:09.076636    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:31:09.076636    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:31:09.076724    6300 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"430"},"items":[{"metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0722 01:31:09.077267    6300 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 01:31:09.077425    6300 node_conditions.go:123] node cpu capacity is 2
	I0722 01:31:09.077425    6300 node_conditions.go:105] duration metric: took 153.9367ms to run NodePressure ...
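	The NodePressure verification above lists the nodes, reads their capacity (ephemeral storage 17734596Ki and cpu 2 in this run), and checks the pressure conditions. A sketch of the same reads, again assuming a configured clientset:
	
	-- go sketch --
	package sketch
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// nodePressure reports capacity and fails if a node is under memory or
	// disk pressure, mirroring the node_conditions.go checks above.
	func nodePressure(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
			for _, c := range n.Status.Conditions {
				if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s reports %s", n.Name, c.Type)
				}
			}
		}
		return nil
	}
	-- /go sketch --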
	I0722 01:31:09.077425    6300 start.go:241] waiting for startup goroutines ...
	I0722 01:31:09.077425    6300 start.go:246] waiting for cluster config update ...
	I0722 01:31:09.077491    6300 start.go:255] writing updated cluster config ...
	I0722 01:31:09.084003    6300 out.go:177] 
	I0722 01:31:09.091010    6300 config.go:182] Loaded profile config "ha-474700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:31:09.092531    6300 config.go:182] Loaded profile config "multinode-227000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:31:09.093555    6300 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\config.json ...
	I0722 01:31:09.098031    6300 out.go:177] * Starting "multinode-227000-m02" worker node in "multinode-227000" cluster
	I0722 01:31:09.102030    6300 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 01:31:09.102030    6300 cache.go:56] Caching tarball of preloaded images
	I0722 01:31:09.102800    6300 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 01:31:09.103219    6300 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 01:31:09.103219    6300 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\config.json ...
	I0722 01:31:09.106821    6300 start.go:360] acquireMachinesLock for multinode-227000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 01:31:09.107842    6300 start.go:364] duration metric: took 1.0209ms to acquireMachinesLock for "multinode-227000-m02"
	I0722 01:31:09.108105    6300 start.go:93] Provisioning new machine with config: &{Name:multinode-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.96 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0722 01:31:09.108105    6300 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0722 01:31:09.110584    6300 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 01:31:09.110584    6300 start.go:159] libmachine.API.Create for "multinode-227000" (driver="hyperv")
	I0722 01:31:09.111415    6300 client.go:168] LocalClient.Create starting
	I0722 01:31:09.111594    6300 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0722 01:31:09.112180    6300 main.go:141] libmachine: Decoding PEM data...
	I0722 01:31:09.112180    6300 main.go:141] libmachine: Parsing certificate...
	I0722 01:31:09.112353    6300 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0722 01:31:09.112553    6300 main.go:141] libmachine: Decoding PEM data...
	I0722 01:31:09.112553    6300 main.go:141] libmachine: Parsing certificate...
	I0722 01:31:09.112553    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0722 01:31:11.199544    6300 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0722 01:31:11.199544    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:11.200404    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0722 01:31:13.098302    6300 main.go:141] libmachine: [stdout =====>] : False
	
	I0722 01:31:13.098302    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:13.098302    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 01:31:14.744922    6300 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 01:31:14.745438    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:14.745438    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 01:31:18.622992    6300 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 01:31:18.623747    6300 main.go:141] libmachine: [stderr =====>] : 
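	From here on the driver works by shelling out to powershell.exe -NoProfile -NonInteractive and parsing the output; the switch query above returns the ConvertTo-Json document shown. A sketch of issuing that query and decoding it in Go; the struct fields match the JSON above, and the SwitchType comment reflects Hyper-V's enum (where, to my understanding, 1 means Internal, as with the Default Switch):
	
	-- go sketch --
	package main
	
	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)
	
	// vmSwitch matches the fields selected by the Get-VMSwitch query above.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int // 1 = Internal (the Default Switch above)
	}
	
	func main() {
		// Same query shape as the log, via the same powershell.exe flags.
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
			`[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType)`).Output()
		if err != nil {
			log.Fatal(err)
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			log.Fatal(err)
		}
		for _, s := range switches {
			fmt.Printf("switch %q (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
		}
	}
	-- /go sketch --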
	I0722 01:31:18.626112    6300 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 01:31:19.134186    6300 main.go:141] libmachine: Creating SSH key...
	I0722 01:31:19.241314    6300 main.go:141] libmachine: Creating VM...
	I0722 01:31:19.241314    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0722 01:31:22.384183    6300 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0722 01:31:22.384183    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:22.384370    6300 main.go:141] libmachine: Using switch "Default Switch"
	I0722 01:31:22.384511    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0722 01:31:24.351532    6300 main.go:141] libmachine: [stdout =====>] : True
	
	I0722 01:31:24.352054    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:24.352054    6300 main.go:141] libmachine: Creating VHD
	I0722 01:31:24.352054    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0722 01:31:28.349246    6300 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 0E9EA0CA-4882-4DC3-A4DA-161233EFA589
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0722 01:31:28.349849    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:28.349849    6300 main.go:141] libmachine: Writing magic tar header
	I0722 01:31:28.349849    6300 main.go:141] libmachine: Writing SSH key tar header
	I0722 01:31:28.361782    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0722 01:31:31.740286    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:31:31.740286    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:31.740459    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\disk.vhd' -SizeBytes 20000MB
	I0722 01:31:34.452621    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:31:34.452621    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:34.453391    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-227000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0722 01:31:38.320906    6300 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-227000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0722 01:31:38.321232    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:38.321384    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-227000-m02 -DynamicMemoryEnabled $false
	I0722 01:31:40.750318    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:31:40.750318    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:40.751095    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-227000-m02 -Count 2
	I0722 01:31:43.118408    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:31:43.118806    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:43.118806    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-227000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\boot2docker.iso'
	I0722 01:31:45.908943    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:31:45.908943    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:45.909448    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-227000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\disk.vhd'
	I0722 01:31:48.729953    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:31:48.730445    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:48.730445    6300 main.go:141] libmachine: Starting VM...
	I0722 01:31:48.730506    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-227000-m02
	I0722 01:31:52.059266    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:31:52.059266    6300 main.go:141] libmachine: [stderr =====>] : 
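
Note: the cmdlets above are the whole node-provisioning sequence: New-VHD creates a tiny 10MB fixed image (so the driver can write its magic tar header and SSH key straight into it), Convert-VHD turns it into a dynamic disk, Resize-VHD grows it to the requested 20000MB, then New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive and Start-VM bring the machine up. A minimal standalone Go sketch of the same shell-out pattern (the VM name, paths, and the runPS helper are placeholders, not minikube's API):

	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
	)
	
	// runPS runs one PowerShell pipeline non-interactively, mirroring the
	// "[executing ==>]" lines in the log, and returns combined output.
	func runPS(pipeline string) (string, error) {
		cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", pipeline)
		out, err := cmd.CombinedOutput()
		return string(out), err
	}
	
	func main() {
		name := "example-vm"             // placeholder VM name
		dir := `C:\machines\example-vm`  // placeholder machine directory
		steps := []string{
			// Disk: small fixed VHD first (the driver writes its SSH-key tar
			// payload into it, omitted here), then convert to dynamic and grow.
			fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir),
			fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir),
			fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir),
			// VM: create, pin memory and CPUs, attach boot ISO and disk, start.
			fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir),
			fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
			fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name),
			fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
			fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
			fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
		}
		for _, s := range steps {
			if out, err := runPS(s); err != nil {
				log.Fatalf("%s failed: %v\n%s", s, err, out)
			}
		}
	}
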
	I0722 01:31:52.059266    6300 main.go:141] libmachine: Waiting for host to start...
	I0722 01:31:52.059266    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:31:54.503299    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:31:54.504136    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:54.504281    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:31:57.266998    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:31:57.266998    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:31:58.280845    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:00.665655    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:00.665655    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:00.665655    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:32:03.390971    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:32:03.391583    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:04.399200    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:06.754546    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:06.754546    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:06.754546    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:32:09.477845    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:32:09.477845    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:10.486174    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:12.925210    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:12.925210    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:12.925865    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:32:15.641932    6300 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:32:15.641932    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:16.654731    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:19.057772    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:19.057940    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:19.058010    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:32:21.847076    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:32:21.847076    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:21.847076    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:24.177795    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:24.177795    6300 main.go:141] libmachine: [stderr =====>] : 
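
Note: the repeated Get-VM queries above are a plain poll loop. While the VM reports Running, an empty ipaddresses[0] reply only means the guest has not published an address yet, so the driver sleeps roughly a second and asks again until the IP appears. A sketch of that loop (interval, timeout, and helper names are assumptions):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// ps runs one PowerShell pipeline and returns trimmed stdout; errors are
	// folded into an empty result because "not ready yet" looks the same.
	func ps(pipeline string) string {
		out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", pipeline).Output()
		return strings.TrimSpace(string(out))
	}
	
	// waitForIP mirrors the loop in the log: keep polling state and the first
	// adapter's first address until the guest reports one.
	func waitForIP(name string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, name)) == "Running" {
				if ip := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, name)); ip != "" {
					return ip, nil
				}
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("timed out waiting for %s to report an IP", name)
	}
	
	func main() {
		ip, err := waitForIP("multinode-227000-m02", 3*time.Minute)
		fmt.Println(ip, err)
	}
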
	I0722 01:32:24.177795    6300 machine.go:94] provisionDockerMachine start ...
	I0722 01:32:24.178737    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:26.548584    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:26.548784    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:26.548956    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:32:29.255988    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:32:29.255988    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:29.262532    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:32:29.276312    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.41 22 <nil> <nil>}
	I0722 01:32:29.276312    6300 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 01:32:29.418634    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 01:32:29.418701    6300 buildroot.go:166] provisioning hostname "multinode-227000-m02"
	I0722 01:32:29.418767    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:31.737044    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:31.737119    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:31.737191    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:32:34.460393    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:32:34.460498    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:34.466226    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:32:34.466951    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.41 22 <nil> <nil>}
	I0722 01:32:34.466951    6300 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-227000-m02 && echo "multinode-227000-m02" | sudo tee /etc/hostname
	I0722 01:32:34.624620    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-227000-m02
	
	I0722 01:32:34.624735    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:36.961645    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:36.961645    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:36.962611    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:32:39.686441    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:32:39.686441    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:39.692799    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:32:39.693335    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.41 22 <nil> <nil>}
	I0722 01:32:39.693335    6300 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-227000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-227000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-227000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 01:32:39.845095    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: 
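
Note: the /etc/hosts script above is idempotent: it does nothing when some line already ends in the hostname, rewrites an existing 127.0.1.1 entry when there is one, and appends otherwise. A pure-Go equivalent of the same logic for illustration (run against a scratch file rather than /etc/hosts):

	package main
	
	import (
		"os"
		"regexp"
	)
	
	// ensureHostname makes sure exactly one 127.0.1.1 line maps to hostname,
	// matching the grep/sed/tee shell above.
	func ensureHostname(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Equivalent of: grep -xq '.*\s<hostname>' /etc/hosts
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
			return nil // already present on some line
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.Match(data) {
			data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
		} else {
			data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
		}
		return os.WriteFile(path, data, 0644)
	}
	
	func main() {
		_ = ensureHostname("hosts.sample", "multinode-227000-m02") // scratch file
	}
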
	I0722 01:32:39.845095    6300 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0722 01:32:39.845095    6300 buildroot.go:174] setting up certificates
	I0722 01:32:39.845095    6300 provision.go:84] configureAuth start
	I0722 01:32:39.845095    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:42.136546    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:42.136546    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:42.137091    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:32:44.889811    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:32:44.889956    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:44.889956    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:47.249692    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:47.250219    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:47.250277    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:32:49.924268    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:32:49.924268    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:49.924384    6300 provision.go:143] copyHostCerts
	I0722 01:32:49.924384    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0722 01:32:49.924384    6300 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0722 01:32:49.924384    6300 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0722 01:32:49.925147    6300 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0722 01:32:49.926591    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0722 01:32:49.927124    6300 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0722 01:32:49.927124    6300 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0722 01:32:49.927423    6300 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0722 01:32:49.928647    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0722 01:32:49.929096    6300 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0722 01:32:49.929096    6300 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0722 01:32:49.929096    6300 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0722 01:32:49.930611    6300 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-227000-m02 san=[127.0.0.1 172.28.193.41 localhost minikube multinode-227000-m02]
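
Note: "generating server cert" means issuing a TLS server certificate, signed by the local CA, whose SANs cover the loopback address, the VM IP, and the node names, so the Docker daemon can be verified under any of them. A sketch with Go's crypto/x509 (the CA is generated on the fly here as an assumption; the run above loads ca.pem/ca-key.pem from disk):

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"log"
		"math/big"
		"net"
		"time"
	)
	
	func main() {
		// Throwaway CA standing in for ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)
	
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-227000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log line: IPs go in IPAddresses, names in DNSNames.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.193.41")},
			DNSNames:    []string{"localhost", "minikube", "multinode-227000-m02"},
		}
		if _, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey); err != nil {
			log.Fatal(err)
		}
		log.Println("server cert issued with the SANs listed in the log")
	}
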
	I0722 01:32:50.058576    6300 provision.go:177] copyRemoteCerts
	I0722 01:32:50.070595    6300 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 01:32:50.071579    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:52.373355    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:52.373751    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:52.373751    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:32:55.042100    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:32:55.042100    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:55.043297    6300 sshutil.go:53] new ssh client: &{IP:172.28.193.41 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\id_rsa Username:docker}
	I0722 01:32:55.155441    6300 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0838015s)
	I0722 01:32:55.155441    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0722 01:32:55.156058    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 01:32:55.208377    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0722 01:32:55.208838    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 01:32:55.258563    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0722 01:32:55.258889    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0722 01:32:55.318478    6300 provision.go:87] duration metric: took 15.473199s to configureAuth
	I0722 01:32:55.318534    6300 buildroot.go:189] setting minikube options for container-runtime
	I0722 01:32:55.319249    6300 config.go:182] Loaded profile config "multinode-227000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:32:55.319294    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:32:57.632345    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:32:57.633170    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:32:57.633170    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:33:00.375748    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:33:00.376262    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:00.381762    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:33:00.382436    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.41 22 <nil> <nil>}
	I0722 01:33:00.382436    6300 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 01:33:00.521044    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 01:33:00.521193    6300 buildroot.go:70] root file system type: tmpfs
	I0722 01:33:00.521452    6300 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 01:33:00.521452    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:33:02.894022    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:33:02.894714    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:02.894890    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:33:05.659503    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:33:05.659792    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:05.664965    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:33:05.665662    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.41 22 <nil> <nil>}
	I0722 01:33:05.665883    6300 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.193.96"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 01:33:05.847934    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.193.96
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 01:33:05.848471    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:33:08.218851    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:33:08.218983    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:08.218983    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:33:10.978383    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:33:10.978383    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:10.984992    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:33:10.984992    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.41 22 <nil> <nil>}
	I0722 01:33:10.985576    6300 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 01:33:13.328794    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 01:33:13.328876    6300 machine.go:97] duration metric: took 49.1504987s to provisionDockerMachine
	I0722 01:33:13.328876    6300 client.go:171] duration metric: took 2m4.2160126s to LocalClient.Create
	I0722 01:33:13.328876    6300 start.go:167] duration metric: took 2m4.2168433s to libmachine.API.Create "multinode-227000"
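
Note: the last SSH command of provisioning installs the generated docker.service only when it differs from what is already on disk, then reloads systemd and (re)starts Docker. Here diff failed only because no unit existed yet, so the new file was moved into place and the service enabled. The same compare-and-swap expressed in Go (local paths and root privileges are assumptions):

	package main
	
	import (
		"bytes"
		"fmt"
		"log"
		"os"
		"os/exec"
	)
	
	// installIfChanged swaps the new unit in only when its content differs,
	// like `diff -u old new || { mv; daemon-reload; enable; restart; }`.
	func installIfChanged(current, proposed string) error {
		oldData, readErr := os.ReadFile(current) // a missing file counts as "changed"
		newData, err := os.ReadFile(proposed)
		if err != nil {
			return err
		}
		if readErr == nil && bytes.Equal(oldData, newData) {
			return os.Remove(proposed) // identical: keep the existing unit
		}
		if err := os.Rename(proposed, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"},
			{"enable", "docker"},
			{"restart", "docker"},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
			}
		}
		return nil
	}
	
	func main() {
		if err := installIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new"); err != nil {
			log.Fatal(err)
		}
	}
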
	I0722 01:33:13.328876    6300 start.go:293] postStartSetup for "multinode-227000-m02" (driver="hyperv")
	I0722 01:33:13.328876    6300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 01:33:13.341576    6300 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 01:33:13.341576    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:33:15.670765    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:33:15.671094    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:15.671197    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:33:18.398234    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:33:18.399303    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:18.400063    6300 sshutil.go:53] new ssh client: &{IP:172.28.193.41 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\id_rsa Username:docker}
	I0722 01:33:18.510255    6300 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1686172s)
	I0722 01:33:18.522579    6300 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 01:33:18.529253    6300 command_runner.go:130] > NAME=Buildroot
	I0722 01:33:18.529253    6300 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0722 01:33:18.529559    6300 command_runner.go:130] > ID=buildroot
	I0722 01:33:18.529559    6300 command_runner.go:130] > VERSION_ID=2023.02.9
	I0722 01:33:18.529559    6300 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0722 01:33:18.529559    6300 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 01:33:18.529641    6300 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0722 01:33:18.530459    6300 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0722 01:33:18.530787    6300 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0722 01:33:18.530787    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0722 01:33:18.543625    6300 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 01:33:18.563422    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0722 01:33:18.610785    6300 start.go:296] duration metric: took 5.2818469s for postStartSetup
	I0722 01:33:18.613652    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:33:20.962690    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:33:20.962938    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:20.962938    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:33:23.765597    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:33:23.765679    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:23.765951    6300 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\config.json ...
	I0722 01:33:23.768792    6300 start.go:128] duration metric: took 2m14.6591145s to createHost
	I0722 01:33:23.768928    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:33:26.069130    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:33:26.069130    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:26.069130    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:33:28.796416    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:33:28.797131    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:28.803017    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:33:28.803400    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.41 22 <nil> <nil>}
	I0722 01:33:28.803400    6300 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 01:33:28.936553    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721612008.954976329
	
	I0722 01:33:28.936553    6300 fix.go:216] guest clock: 1721612008.954976329
	I0722 01:33:28.936553    6300 fix.go:229] Guest: 2024-07-22 01:33:28.954976329 +0000 UTC Remote: 2024-07-22 01:33:23.7688592 +0000 UTC m=+356.925261901 (delta=5.186117129s)
	I0722 01:33:28.936759    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:33:31.224070    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:33:31.224070    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:31.224942    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:33:33.935660    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:33:33.936622    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:33.941971    6300 main.go:141] libmachine: Using SSH client type: native
	I0722 01:33:33.942590    6300 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.193.41 22 <nil> <nil>}
	I0722 01:33:33.942590    6300 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721612008
	I0722 01:33:34.099135    6300 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jul 22 01:33:28 UTC 2024
	
	I0722 01:33:34.099667    6300 fix.go:236] clock set: Mon Jul 22 01:33:28 UTC 2024
	 (err=<nil>)
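
Note: the clock check parses the guest's "date +%s.%N" reply, computes the drift against the host clock (5.19s in this run), and rewrites the guest time with "date -s @<epoch>" when the drift is too large. A sketch of the check (the 2s threshold and the choice of the host clock as the reference are assumptions):

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// syncGuestClock takes the raw stdout of `date +%s.%N` on the guest and
	// returns the command to run there, or "" if the drift is tolerable.
	func syncGuestClock(guestOut string, host time.Time) (string, error) {
		fields := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(fields[0], 10, 64)
		if err != nil {
			return "", err
		}
		delta := time.Unix(sec, 0).Sub(host)
		if delta > -2*time.Second && delta < 2*time.Second {
			return "", nil // close enough, leave the guest alone
		}
		return fmt.Sprintf("sudo date -s @%d", host.Unix()), nil
	}
	
	func main() {
		cmd, _ := syncGuestClock("1721612008.954976329", time.Now())
		fmt.Println(cmd)
	}
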
	I0722 01:33:34.099667    6300 start.go:83] releasing machines lock for "multinode-227000-m02", held for 2m24.9900271s
	I0722 01:33:34.099937    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:33:36.426778    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:33:36.426778    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:36.426778    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:33:39.147388    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:33:39.147388    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:39.153389    6300 out.go:177] * Found network options:
	I0722 01:33:39.156717    6300 out.go:177]   - NO_PROXY=172.28.193.96
	W0722 01:33:39.159973    6300 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 01:33:39.162721    6300 out.go:177]   - NO_PROXY=172.28.193.96
	W0722 01:33:39.165218    6300 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 01:33:39.166558    6300 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 01:33:39.169737    6300 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0722 01:33:39.169834    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:33:39.179850    6300 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 01:33:39.179850    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:33:41.536166    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:33:41.536369    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:41.536369    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:33:41.548020    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:33:41.548020    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:41.548114    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:33:44.406713    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:33:44.407736    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:44.408067    6300 sshutil.go:53] new ssh client: &{IP:172.28.193.41 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\id_rsa Username:docker}
	I0722 01:33:44.431749    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:33:44.431749    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:44.432868    6300 sshutil.go:53] new ssh client: &{IP:172.28.193.41 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\id_rsa Username:docker}
	I0722 01:33:44.506578    6300 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0722 01:33:44.507078    6300 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.3372767s)
	W0722 01:33:44.507078    6300 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0722 01:33:44.524833    6300 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0722 01:33:44.525907    6300 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.3459925s)
	W0722 01:33:44.525907    6300 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 01:33:44.538471    6300 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 01:33:44.567968    6300 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0722 01:33:44.567968    6300 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
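
Note: the find/-exec mv above side-lines any bridge or podman CNI config by renaming it with a .mk_disabled suffix so it cannot conflict with the cluster's own CNI. The same effect in Go (directory path hard-coded for illustration):

	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)
	
	// disableBridgeCNIs renames bridge/podman configs to <name>.mk_disabled,
	// skipping anything already disabled, and reports what it moved.
	func disableBridgeCNIs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var moved []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return moved, err
				}
				moved = append(moved, src)
			}
		}
		return moved, nil
	}
	
	func main() {
		moved, err := disableBridgeCNIs("/etc/cni/net.d")
		fmt.Println(moved, err)
	}
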
	I0722 01:33:44.568902    6300 start.go:495] detecting cgroup driver to use...
	I0722 01:33:44.569112    6300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 01:33:44.603110    6300 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0722 01:33:44.616523    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 01:33:44.649187    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 01:33:44.671084    6300 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	W0722 01:33:44.672253    6300 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0722 01:33:44.672253    6300 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0722 01:33:44.684976    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 01:33:44.720394    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 01:33:44.752455    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 01:33:44.783707    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 01:33:44.817020    6300 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 01:33:44.848528    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 01:33:44.880115    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 01:33:44.913541    6300 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 01:33:44.946888    6300 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 01:33:44.964128    6300 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0722 01:33:44.976565    6300 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 01:33:45.009680    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:33:45.222241    6300 ssh_runner.go:195] Run: sudo systemctl restart containerd
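
Note: the sed pipeline above rewrites /etc/containerd/config.toml in place (sandbox image, cgroup driver, runc version, CNI conf dir) before restarting containerd. One of those edits, flipping SystemdCgroup to false so containerd uses the cgroupfs driver, done as an in-memory rewrite in Go (the path is containerd's usual default, and writing it needs root):

	package main
	
	import (
		"log"
		"os"
		"regexp"
	)
	
	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, data, 0644); err != nil {
			log.Fatal(err)
		}
	}
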
	I0722 01:33:45.256409    6300 start.go:495] detecting cgroup driver to use...
	I0722 01:33:45.271510    6300 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 01:33:45.294571    6300 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0722 01:33:45.295018    6300 command_runner.go:130] > [Unit]
	I0722 01:33:45.295018    6300 command_runner.go:130] > Description=Docker Application Container Engine
	I0722 01:33:45.295018    6300 command_runner.go:130] > Documentation=https://docs.docker.com
	I0722 01:33:45.295018    6300 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0722 01:33:45.295018    6300 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0722 01:33:45.295018    6300 command_runner.go:130] > StartLimitBurst=3
	I0722 01:33:45.295018    6300 command_runner.go:130] > StartLimitIntervalSec=60
	I0722 01:33:45.295018    6300 command_runner.go:130] > [Service]
	I0722 01:33:45.295018    6300 command_runner.go:130] > Type=notify
	I0722 01:33:45.295018    6300 command_runner.go:130] > Restart=on-failure
	I0722 01:33:45.295018    6300 command_runner.go:130] > Environment=NO_PROXY=172.28.193.96
	I0722 01:33:45.295018    6300 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0722 01:33:45.295018    6300 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0722 01:33:45.295171    6300 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0722 01:33:45.295171    6300 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0722 01:33:45.295171    6300 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0722 01:33:45.295171    6300 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0722 01:33:45.295171    6300 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0722 01:33:45.295171    6300 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0722 01:33:45.295171    6300 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0722 01:33:45.295333    6300 command_runner.go:130] > ExecStart=
	I0722 01:33:45.295333    6300 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0722 01:33:45.295333    6300 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0722 01:33:45.295333    6300 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0722 01:33:45.295333    6300 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0722 01:33:45.295420    6300 command_runner.go:130] > LimitNOFILE=infinity
	I0722 01:33:45.295420    6300 command_runner.go:130] > LimitNPROC=infinity
	I0722 01:33:45.295420    6300 command_runner.go:130] > LimitCORE=infinity
	I0722 01:33:45.295420    6300 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0722 01:33:45.295420    6300 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0722 01:33:45.295420    6300 command_runner.go:130] > TasksMax=infinity
	I0722 01:33:45.295420    6300 command_runner.go:130] > TimeoutStartSec=0
	I0722 01:33:45.295512    6300 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0722 01:33:45.295512    6300 command_runner.go:130] > Delegate=yes
	I0722 01:33:45.295512    6300 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0722 01:33:45.295512    6300 command_runner.go:130] > KillMode=process
	I0722 01:33:45.295512    6300 command_runner.go:130] > [Install]
	I0722 01:33:45.295512    6300 command_runner.go:130] > WantedBy=multi-user.target
	I0722 01:33:45.313113    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 01:33:45.347059    6300 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 01:33:45.391017    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 01:33:45.429822    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 01:33:45.474230    6300 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 01:33:45.546072    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 01:33:45.570749    6300 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 01:33:45.607296    6300 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0722 01:33:45.619247    6300 ssh_runner.go:195] Run: which cri-dockerd
	I0722 01:33:45.626474    6300 command_runner.go:130] > /usr/bin/cri-dockerd
	I0722 01:33:45.639384    6300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 01:33:45.657849    6300 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 01:33:45.702314    6300 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 01:33:45.924550    6300 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 01:33:46.124374    6300 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 01:33:46.124549    6300 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 01:33:46.170280    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:33:46.387370    6300 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 01:33:49.017078    6300 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6287537s)
	I0722 01:33:49.029868    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 01:33:49.069565    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 01:33:49.107614    6300 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 01:33:49.313723    6300 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 01:33:49.526103    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:33:49.737872    6300 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 01:33:49.782454    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 01:33:49.820941    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:33:50.038594    6300 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 01:33:50.166966    6300 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 01:33:50.178827    6300 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 01:33:50.187069    6300 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0722 01:33:50.187069    6300 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0722 01:33:50.187069    6300 command_runner.go:130] > Device: 0,22	Inode: 880         Links: 1
	I0722 01:33:50.187069    6300 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0722 01:33:50.187069    6300 command_runner.go:130] > Access: 2024-07-22 01:33:50.081853854 +0000
	I0722 01:33:50.187069    6300 command_runner.go:130] > Modify: 2024-07-22 01:33:50.081853854 +0000
	I0722 01:33:50.187069    6300 command_runner.go:130] > Change: 2024-07-22 01:33:50.085853760 +0000
	I0722 01:33:50.187069    6300 command_runner.go:130] >  Birth: -
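
Note: "Will wait 60s for socket path" amounts to a stat poll: keep checking until the path exists and has socket mode, then proceed (the stat output above shows it was already there on the first try). A sketch of the wait (the poll interval is an assumption):

	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls until path exists as a unix socket or the deadline
	// passes, mirroring the "Will wait 60s for socket path" step.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %s", path, timeout)
	}
	
	func main() {
		fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
	}
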
	I0722 01:33:50.187069    6300 start.go:563] Will wait 60s for crictl version
	I0722 01:33:50.197968    6300 ssh_runner.go:195] Run: which crictl
	I0722 01:33:50.204826    6300 command_runner.go:130] > /usr/bin/crictl
	I0722 01:33:50.219850    6300 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 01:33:50.272304    6300 command_runner.go:130] > Version:  0.1.0
	I0722 01:33:50.272304    6300 command_runner.go:130] > RuntimeName:  docker
	I0722 01:33:50.272304    6300 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0722 01:33:50.272304    6300 command_runner.go:130] > RuntimeApiVersion:  v1
	I0722 01:33:50.272304    6300 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 01:33:50.284317    6300 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 01:33:50.319309    6300 command_runner.go:130] > 27.0.3
	I0722 01:33:50.329348    6300 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 01:33:50.364307    6300 command_runner.go:130] > 27.0.3
	I0722 01:33:50.369600    6300 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 01:33:50.376118    6300 out.go:177]   - env NO_PROXY=172.28.193.96
	I0722 01:33:50.382008    6300 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0722 01:33:50.386013    6300 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0722 01:33:50.386013    6300 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0722 01:33:50.386013    6300 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0722 01:33:50.386013    6300 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:e8:0a:ec Flags:up|broadcast|multicast|running}
	I0722 01:33:50.389011    6300 ip.go:210] interface addr: fe80::cedd:59ec:4db2:d0bf/64
	I0722 01:33:50.389011    6300 ip.go:210] interface addr: 172.28.192.1/20
	I0722 01:33:50.401372    6300 ssh_runner.go:195] Run: grep 172.28.192.1	host.minikube.internal$ /etc/hosts
	I0722 01:33:50.408121    6300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 01:33:50.431861    6300 mustload.go:65] Loading cluster: multinode-227000
	I0722 01:33:50.432627    6300 config.go:182] Loaded profile config "multinode-227000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:33:50.433283    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:33:52.689776    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:33:52.689776    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:52.689938    6300 host.go:66] Checking if "multinode-227000" exists ...
	I0722 01:33:52.690854    6300 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000 for IP: 172.28.193.41
	I0722 01:33:52.690919    6300 certs.go:194] generating shared ca certs ...
	I0722 01:33:52.690919    6300 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 01:33:52.691277    6300 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0722 01:33:52.691918    6300 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0722 01:33:52.691918    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 01:33:52.692446    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0722 01:33:52.692690    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 01:33:52.692745    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 01:33:52.693385    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem (1338 bytes)
	W0722 01:33:52.693785    6300 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100_empty.pem, impossibly tiny 0 bytes
	I0722 01:33:52.693853    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0722 01:33:52.694271    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0722 01:33:52.694326    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0722 01:33:52.694853    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0722 01:33:52.694990    6300 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem (1708 bytes)
	I0722 01:33:52.695522    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /usr/share/ca-certificates/51002.pem
	I0722 01:33:52.695914    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 01:33:52.696075    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem -> /usr/share/ca-certificates/5100.pem
	I0722 01:33:52.696075    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 01:33:52.747385    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 01:33:52.800237    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 01:33:52.847948    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 01:33:52.897207    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /usr/share/ca-certificates/51002.pem (1708 bytes)
	I0722 01:33:52.946793    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 01:33:52.994440    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5100.pem --> /usr/share/ca-certificates/5100.pem (1338 bytes)
	I0722 01:33:53.078982    6300 ssh_runner.go:195] Run: openssl version
	I0722 01:33:53.092159    6300 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0722 01:33:53.105030    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5100.pem && ln -fs /usr/share/ca-certificates/5100.pem /etc/ssl/certs/5100.pem"
	I0722 01:33:53.140243    6300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5100.pem
	I0722 01:33:53.147624    6300 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 21 23:45 /usr/share/ca-certificates/5100.pem
	I0722 01:33:53.147711    6300 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 21 23:45 /usr/share/ca-certificates/5100.pem
	I0722 01:33:53.159377    6300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5100.pem
	I0722 01:33:53.168637    6300 command_runner.go:130] > 51391683
	I0722 01:33:53.180033    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5100.pem /etc/ssl/certs/51391683.0"
	I0722 01:33:53.210648    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/51002.pem && ln -fs /usr/share/ca-certificates/51002.pem /etc/ssl/certs/51002.pem"
	I0722 01:33:53.241648    6300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/51002.pem
	I0722 01:33:53.248757    6300 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 21 23:45 /usr/share/ca-certificates/51002.pem
	I0722 01:33:53.248757    6300 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 21 23:45 /usr/share/ca-certificates/51002.pem
	I0722 01:33:53.261443    6300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/51002.pem
	I0722 01:33:53.270556    6300 command_runner.go:130] > 3ec20f2e
	I0722 01:33:53.285193    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/51002.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 01:33:53.314931    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 01:33:53.348289    6300 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 01:33:53.355935    6300 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 21 23:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 01:33:53.355935    6300 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 21 23:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 01:33:53.368110    6300 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 01:33:53.378016    6300 command_runner.go:130] > b5213941
	I0722 01:33:53.390112    6300 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
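	
	The three hash-and-symlink rounds above follow OpenSSL's subject-hash lookup convention: certificates under /etc/ssl/certs are located by the hash that `openssl x509 -hash` prints, stored as a symlink named <hash>.0. A minimal sketch of one round, using the minikubeCA.pem path from this log:
	
	    # OpenSSL resolves trusted CAs in /etc/ssl/certs by subject-name hash;
	    # the ".0" suffix disambiguates certificates whose subject hashes collide.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	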
	I0722 01:33:53.428013    6300 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 01:33:53.435003    6300 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 01:33:53.435394    6300 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0722 01:33:53.435942    6300 kubeadm.go:934] updating node {m02 172.28.193.41 8443 v1.30.3 docker false true} ...
	I0722 01:33:53.436274    6300 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-227000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.193.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
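	
	Note the empty `ExecStart=` line in the kubelet drop-in above: for systemd services that allow only a single command, an empty assignment first clears the ExecStart inherited from the base kubelet.service before the replacement command line is set. A quick way to inspect the merged result on the node (stock systemctl, nothing minikube-specific):
	
	    # Print the base unit plus every drop-in, in the order systemd merges them.
	    systemctl cat kubelet
	    # Confirm which ExecStart actually won after the daemon-reload below.
	    systemctl show kubelet -p ExecStart
	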
	I0722 01:33:53.447762    6300 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 01:33:53.467936    6300 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	I0722 01:33:53.467997    6300 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0722 01:33:53.479411    6300 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0722 01:33:53.499001    6300 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0722 01:33:53.499001    6300 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0722 01:33:53.499001    6300 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0722 01:33:53.499001    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 01:33:53.499001    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 01:33:53.512004    6300 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0722 01:33:53.513006    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 01:33:53.513006    6300 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0722 01:33:53.520398    6300 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0722 01:33:53.521027    6300 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0722 01:33:53.521027    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0722 01:33:53.568824    6300 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0722 01:33:53.568824    6300 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 01:33:53.568824    6300 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0722 01:33:53.568824    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0722 01:33:53.579831    6300 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0722 01:33:53.639585    6300 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0722 01:33:53.640521    6300 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0722 01:33:53.640689    6300 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
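	
	The `checksum=file:` suffix on the three download URLs above means each binary is verified against the published .sha256 file before being copied to the node. The equivalent manual check for one of them, with the URL and version taken from this log (sha256sum syntax assumes GNU coreutils):
	
	    # Download kubelet and verify it against the checksum Kubernetes publishes.
	    curl -LO "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet"
	    echo "$(curl -L https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check
	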
	I0722 01:33:54.852922    6300 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0722 01:33:54.873967    6300 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0722 01:33:54.913222    6300 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 01:33:54.956085    6300 ssh_runner.go:195] Run: grep 172.28.193.96	control-plane.minikube.internal$ /etc/hosts
	I0722 01:33:54.962678    6300 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.193.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
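	
	The temp-file dance in the command above exists because the `>` redirection runs as the unprivileged SSH user, who cannot write /etc/hosts directly; the filtered file is built in /tmp/h.$$ and then installed with a single `sudo cp`. Verifying the resulting pin afterwards is a one-liner:
	
	    # The control-plane endpoint should now resolve locally on the node.
	    grep 'control-plane.minikube.internal' /etc/hosts
	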
	I0722 01:33:54.998024    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:33:55.230669    6300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 01:33:55.265597    6300 host.go:66] Checking if "multinode-227000" exists ...
	I0722 01:33:55.266484    6300 start.go:317] joinCluster: &{Name:multinode-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.96 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.193.41 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 01:33:55.266716    6300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0722 01:33:55.266797    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:33:57.634748    6300 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:33:57.634748    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:33:57.634841    6300 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:34:00.360451    6300 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:34:00.360451    6300 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:34:00.360451    6300 sshutil.go:53] new ssh client: &{IP:172.28.193.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:34:00.573593    6300 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mk1iao.1aikw6tezavc0p8x --discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b 
	I0722 01:34:00.573691    6300 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.3069102s)
	I0722 01:34:00.573768    6300 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.28.193.41 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0722 01:34:00.573768    6300 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mk1iao.1aikw6tezavc0p8x --discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-227000-m02"
	I0722 01:34:00.818582    6300 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 01:34:02.179890    6300 command_runner.go:130] > [preflight] Running pre-flight checks
	I0722 01:34:02.179890    6300 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0722 01:34:02.179890    6300 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0722 01:34:02.179890    6300 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 01:34:02.180012    6300 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 01:34:02.180012    6300 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0722 01:34:02.180012    6300 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0722 01:34:02.180086    6300 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002003337s
	I0722 01:34:02.180086    6300 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0722 01:34:02.180086    6300 command_runner.go:130] > This node has joined the cluster:
	I0722 01:34:02.180086    6300 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0722 01:34:02.180086    6300 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0722 01:34:02.180086    6300 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0722 01:34:02.180162    6300 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mk1iao.1aikw6tezavc0p8x --discovery-token-ca-cert-hash sha256:3c01e8265c91836dbc893fe7bfccac780016dd008288beac67a844e61aa5b84b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-227000-m02": (1.606375s)
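	
	Two details of the join command are worth calling out: `--ttl=0` makes the bootstrap token non-expiring, and `--discovery-token-ca-cert-hash` pins the cluster CA so the new node cannot be tricked into joining an impostor API server. The pinned value is the SHA-256 of the CA's public key, recomputable on the control plane with the standard kubeadm recipe (path assumes the default /etc/kubernetes/pki layout):
	
	    # Should print the hex digest that follows "sha256:" in the join command.
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	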
	I0722 01:34:02.180260    6300 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0722 01:34:02.398583    6300 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0722 01:34:02.613413    6300 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-227000-m02 minikube.k8s.io/updated_at=2024_07_22T01_34_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189 minikube.k8s.io/name=multinode-227000 minikube.k8s.io/primary=false
	I0722 01:34:02.760045    6300 command_runner.go:130] > node/multinode-227000-m02 labeled
	I0722 01:34:02.760282    6300 start.go:319] duration metric: took 7.4937068s to joinCluster
	I0722 01:34:02.760320    6300 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.28.193.41 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0722 01:34:02.761147    6300 config.go:182] Loaded profile config "multinode-227000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:34:02.762876    6300 out.go:177] * Verifying Kubernetes components...
	I0722 01:34:02.778768    6300 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:34:02.991948    6300 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 01:34:03.020586    6300 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 01:34:03.021397    6300 kapi.go:59] client config for multinode-227000: &rest.Config{Host:"https://172.28.193.96:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-227000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-227000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2085e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 01:34:03.022738    6300 node_ready.go:35] waiting up to 6m0s for node "multinode-227000-m02" to be "Ready" ...
	I0722 01:34:03.022932    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:03.023017    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:03.023017    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:03.023017    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:03.036612    6300 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0722 01:34:03.036612    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:03.036612    6300 round_trippers.go:580]     Audit-Id: 4c6716dc-b689-47e0-b680-57575498fb5f
	I0722 01:34:03.036612    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:03.036612    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:03.036612    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:03.036612    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:03.037099    6300 round_trippers.go:580]     Content-Length: 3920
	I0722 01:34:03.037099    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:03 GMT
	I0722 01:34:03.037140    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"595","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0722 01:34:03.534792    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:03.534918    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:03.534918    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:03.534918    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:03.536798    6300 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 01:34:03.537629    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:03.537629    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:03.537629    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:03.537629    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:03.537629    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:03.537629    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:03.537629    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:03 GMT
	I0722 01:34:03.537629    6300 round_trippers.go:580]     Audit-Id: 7cb6fe96-792f-4972-b1b9-84f938fa392f
	I0722 01:34:03.537869    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:04.036473    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:04.036739    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:04.036739    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:04.036739    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:04.040629    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:04.040629    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:04.040629    6300 round_trippers.go:580]     Audit-Id: 5ca91f2e-dcbe-431b-957c-dfebe9675a46
	I0722 01:34:04.040629    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:04.040629    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:04.040629    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:04.040908    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:04.040908    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:04.040908    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:04 GMT
	I0722 01:34:04.041062    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:04.537294    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:04.537594    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:04.537594    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:04.537594    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:04.541283    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:04.541283    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:04.541283    6300 round_trippers.go:580]     Audit-Id: 18950925-d578-4b5a-b16a-13aa26ad802c
	I0722 01:34:04.541283    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:04.541283    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:04.541283    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:04.541960    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:04.541960    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:04.541960    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:04 GMT
	I0722 01:34:04.542038    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:05.036709    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:05.036709    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:05.036821    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:05.036821    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:05.041029    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:05.041029    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:05.041162    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:05.041162    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:05 GMT
	I0722 01:34:05.041162    6300 round_trippers.go:580]     Audit-Id: 20c5ebe9-6327-4307-8760-d4dc16ace48b
	I0722 01:34:05.041162    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:05.041162    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:05.041162    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:05.041162    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:05.041403    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:05.041403    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
	I0722 01:34:05.535693    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:05.535785    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:05.535785    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:05.535785    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:05.539969    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:05.539969    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:05.539969    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:05 GMT
	I0722 01:34:05.539969    6300 round_trippers.go:580]     Audit-Id: 5e87c648-d2cd-4e32-872d-65b348f9805e
	I0722 01:34:05.539969    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:05.539969    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:05.539969    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:05.540196    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:05.540196    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:05.540306    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:06.034977    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:06.035045    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:06.035045    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:06.035045    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:06.039595    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:06.039595    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:06.039665    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:06.039665    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:06 GMT
	I0722 01:34:06.039665    6300 round_trippers.go:580]     Audit-Id: cf5665a8-1809-49d5-a776-a082d42a1701
	I0722 01:34:06.039665    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:06.039665    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:06.039665    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:06.039665    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:06.039860    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:06.536701    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:06.536701    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:06.536701    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:06.536833    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:06.539741    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:34:06.539741    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:06.539741    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:06.539741    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:06 GMT
	I0722 01:34:06.539741    6300 round_trippers.go:580]     Audit-Id: 912d5c92-d003-49c9-b582-76d3f4db1671
	I0722 01:34:06.539741    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:06.539741    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:06.539741    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:06.539741    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:06.539741    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:07.032410    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:07.032482    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:07.032482    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:07.032482    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:07.036280    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:07.037270    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:07.037270    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:07.037270    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:07.037270    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:07.037270    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:07.037270    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:07 GMT
	I0722 01:34:07.037270    6300 round_trippers.go:580]     Audit-Id: 9fd146bb-e68a-47e7-b584-4a63f37c2ac2
	I0722 01:34:07.037383    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:07.037575    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:07.533785    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:07.533886    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:07.533886    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:07.533886    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:07.538269    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:07.538337    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:07.538337    6300 round_trippers.go:580]     Audit-Id: 4acb2b95-ba35-4ebd-8f16-010d861ab86c
	I0722 01:34:07.538387    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:07.538387    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:07.538387    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:07.538387    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:07.538387    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:07.538456    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:07 GMT
	I0722 01:34:07.538676    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:07.538965    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
	I0722 01:34:08.033490    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:08.033782    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:08.033782    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:08.033782    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:08.039103    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:34:08.039103    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:08.039103    6300 round_trippers.go:580]     Audit-Id: 546f52ad-6042-4c78-a190-aaa2404b5296
	I0722 01:34:08.039103    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:08.039103    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:08.039103    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:08.039103    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:08.039103    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:08.039103    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:08 GMT
	I0722 01:34:08.039103    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:08.535371    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:08.535484    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:08.535484    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:08.535484    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:08.541659    6300 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 01:34:08.541713    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:08.541713    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:08.541713    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:08.541713    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:08.541713    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:08.541713    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:08.541713    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:08 GMT
	I0722 01:34:08.541713    6300 round_trippers.go:580]     Audit-Id: 85747bcb-ef13-4021-a126-3698013a0ebd
	I0722 01:34:08.541713    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:09.037224    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:09.037224    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:09.037224    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:09.037224    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:09.042283    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:34:09.042283    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:09.042283    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:09 GMT
	I0722 01:34:09.042283    6300 round_trippers.go:580]     Audit-Id: f0351aaf-0307-488e-88c0-604148048472
	I0722 01:34:09.042283    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:09.042283    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:09.042283    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:09.042441    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:09.042441    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:09.042591    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:09.526484    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:09.526484    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:09.526484    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:09.526484    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:09.532521    6300 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 01:34:09.533058    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:09.533058    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:09.533058    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:09.533058    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:09 GMT
	I0722 01:34:09.533058    6300 round_trippers.go:580]     Audit-Id: 8a0b584f-fd44-4d3f-967d-c79e87f1d1b5
	I0722 01:34:09.533058    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:09.533115    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:09.533115    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:09.533200    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:10.035994    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:10.035994    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:10.035994    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:10.035994    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:10.041023    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:34:10.041337    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:10.041337    6300 round_trippers.go:580]     Audit-Id: 56e2f7ee-8a54-4285-90fd-d6210189241c
	I0722 01:34:10.041337    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:10.041337    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:10.041337    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:10.041337    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:10.041337    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:10.041337    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:10 GMT
	I0722 01:34:10.041468    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:10.041633    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
	I0722 01:34:10.527591    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:10.527717    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:10.527717    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:10.527717    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:10.532372    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:10.532372    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:10.532372    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:10.532372    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:10 GMT
	I0722 01:34:10.532372    6300 round_trippers.go:580]     Audit-Id: f9ce9aca-81a2-4c4e-b0b1-a6fb4d123a17
	I0722 01:34:10.532372    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:10.532372    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:10.532372    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:10.532372    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:10.532372    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:11.033895    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:11.033895    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:11.033895    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:11.033895    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:11.038552    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:11.038715    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:11.038715    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:11.038715    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:11.038715    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:11.038715    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:11 GMT
	I0722 01:34:11.038715    6300 round_trippers.go:580]     Audit-Id: de660627-fbdc-426d-8982-e0e44702e296
	I0722 01:34:11.038715    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:11.038715    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:11.038715    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:11.524420    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:11.524420    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:11.524420    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:11.524420    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:11.533886    6300 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 01:34:11.534007    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:11.534007    6300 round_trippers.go:580]     Audit-Id: 31b9f22d-08e5-4772-b14e-5d10f39afda4
	I0722 01:34:11.534007    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:11.534007    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:11.534007    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:11.534067    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:11.534067    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:11.534067    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:11 GMT
	I0722 01:34:11.534067    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:12.032977    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:12.032977    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:12.032977    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:12.032977    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:12.036670    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:12.036670    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:12.036670    6300 round_trippers.go:580]     Content-Length: 4029
	I0722 01:34:12.036670    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:12 GMT
	I0722 01:34:12.036670    6300 round_trippers.go:580]     Audit-Id: 3ca0b2da-ded2-4201-9a20-bb4cd5dd9a99
	I0722 01:34:12.036670    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:12.036670    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:12.036670    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:12.036670    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:12.037217    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"599","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0722 01:34:12.528756    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:12.529017    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:12.529093    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:12.529093    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:12.537786    6300 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 01:34:12.537786    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:12.537786    6300 round_trippers.go:580]     Audit-Id: 770d69ba-6451-472f-ad2a-2943fc942d3a
	I0722 01:34:12.537786    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:12.537786    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:12.537786    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:12.537786    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:12.538270    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:12 GMT
	I0722 01:34:12.539223    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:12.539735    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
	I0722 01:34:13.033829    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:13.033829    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:13.033829    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:13.033829    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:13.036776    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:34:13.036776    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:13.036776    6300 round_trippers.go:580]     Audit-Id: 8bdd8292-686d-4d83-acdd-e1165b9ea5aa
	I0722 01:34:13.036776    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:13.037362    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:13.037362    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:13.037362    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:13.037362    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:13 GMT
	I0722 01:34:13.037796    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:13.534112    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:13.534112    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:13.534401    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:13.534401    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:13.658008    6300 round_trippers.go:574] Response Status: 200 OK in 123 milliseconds
	I0722 01:34:13.658505    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:13.658505    6300 round_trippers.go:580]     Audit-Id: d2533562-85c7-4664-8165-f468703dc524
	I0722 01:34:13.658505    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:13.658505    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:13.658505    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:13.658505    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:13.658505    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:13 GMT
	I0722 01:34:13.658505    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:14.023758    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:14.023796    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:14.023796    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:14.023796    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:14.027032    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:14.027032    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:14.027157    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:14.027157    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:14 GMT
	I0722 01:34:14.027157    6300 round_trippers.go:580]     Audit-Id: fb01723d-8f7f-4b90-87a2-6f26a639d44a
	I0722 01:34:14.027157    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:14.027157    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:14.027157    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:14.027328    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:14.591779    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:14.591779    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:14.591779    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:14.591779    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:14.595798    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:14.595798    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:14.595798    6300 round_trippers.go:580]     Audit-Id: e15c4b47-d267-4386-9aba-22237f469e15
	I0722 01:34:14.595798    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:14.595798    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:14.596618    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:14.596618    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:14.596618    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:14 GMT
	I0722 01:34:14.596906    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:14.597180    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
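The `round_trippers.go` lines themselves come from client-go's debugging round tripper, which wraps the HTTP transport and prints each call's method/URL, request headers, response status with timing, and response headers when klog verbosity is high. A sketch of wiring it explicitly is below, assuming a recent client-go in which transport.NewDebuggingRoundTripper and its DebugLevel constants are exported; `WithDebugTransport` is a hypothetical helper name, and the exact verbosity-to-detail mapping varies across versions.

	package debugclient

	import (
		"net/http"

		"k8s.io/client-go/rest"
		"k8s.io/client-go/transport"
	)

	// WithDebugTransport returns a copy of cfg whose transport logs each API
	// call's method, URL, timing, and headers, producing output in the same
	// shape as the round_trippers lines in this report.
	func WithDebugTransport(cfg *rest.Config) *rest.Config {
		out := rest.CopyConfig(cfg)
		out.WrapTransport = func(rt http.RoundTripper) http.RoundTripper {
			return transport.NewDebuggingRoundTripper(rt,
				transport.DebugURLTiming,       // "GET https://... 200 OK in 5 milliseconds"
				transport.DebugRequestHeaders,  // "Request Headers: Accept ..., User-Agent ..."
				transport.DebugResponseHeaders, // "Response Headers: Audit-Id ..., Content-Type ..."
			)
		}
		return out
	}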
	I0722 01:34:15.025698    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:15.025787    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:15.025787    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:15.025787    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:15.029228    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:15.029228    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:15.029228    6300 round_trippers.go:580]     Audit-Id: 967c4d93-9c63-4d36-8dea-cd46ff280a1c
	I0722 01:34:15.029228    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:15.029228    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:15.029228    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:15.029228    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:15.029859    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:15 GMT
	I0722 01:34:15.030166    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:15.535472    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:15.535575    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:15.535575    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:15.535575    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:15.539089    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:15.539089    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:15.539089    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:15.539089    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:15 GMT
	I0722 01:34:15.539089    6300 round_trippers.go:580]     Audit-Id: 75be71e2-f908-4d01-9ec4-11a7ee412892
	I0722 01:34:15.539089    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:15.539089    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:15.539089    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:15.540525    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:16.025783    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:16.025783    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:16.025783    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:16.025783    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:16.029355    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:16.029355    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:16.030045    6300 round_trippers.go:580]     Audit-Id: c7d716ff-43df-4b8b-9e4c-8b28d92cd76d
	I0722 01:34:16.030045    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:16.030045    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:16.030045    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:16.030045    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:16.030045    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:16 GMT
	I0722 01:34:16.030437    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:16.532597    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:16.532597    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:16.532597    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:16.532597    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:16.536567    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:16.536567    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:16.536567    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:16.536567    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:16.536875    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:16 GMT
	I0722 01:34:16.536875    6300 round_trippers.go:580]     Audit-Id: 8b0d2f1e-9038-48a4-aad6-423c5cae79f6
	I0722 01:34:16.536875    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:16.536875    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:16.537092    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:17.023689    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:17.023689    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:17.023689    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:17.023689    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:17.027281    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:17.027281    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:17.027281    6300 round_trippers.go:580]     Audit-Id: e62ea9eb-414c-47e9-a0a0-516c325f03b1
	I0722 01:34:17.027281    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:17.027281    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:17.027510    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:17.027510    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:17.027510    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:17 GMT
	I0722 01:34:17.028063    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:17.028448    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
	I0722 01:34:17.532097    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:17.532153    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:17.532188    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:17.532188    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:17.535455    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:17.536299    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:17.536299    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:17.536299    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:17 GMT
	I0722 01:34:17.536299    6300 round_trippers.go:580]     Audit-Id: bcd39855-6a78-4e30-97bd-b9d1a9957a11
	I0722 01:34:17.536299    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:17.536299    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:17.536299    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:17.536538    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:18.038164    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:18.038164    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:18.038164    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:18.038164    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:18.043955    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:34:18.044451    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:18.044451    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:18.044526    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:18 GMT
	I0722 01:34:18.044526    6300 round_trippers.go:580]     Audit-Id: 946a28c1-e2a1-47b6-bf00-7ebe0ca75488
	I0722 01:34:18.044583    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:18.044583    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:18.044583    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:18.044909    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:18.535175    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:18.535449    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:18.535449    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:18.535449    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:18.539856    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:18.539856    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:18.539856    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:18.539856    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:18.539856    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:18 GMT
	I0722 01:34:18.539856    6300 round_trippers.go:580]     Audit-Id: 3a3cd399-63b2-4924-a148-a3efcb82e50c
	I0722 01:34:18.539856    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:18.539856    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:18.543027    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:19.036360    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:19.036360    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:19.036481    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:19.036481    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:19.039876    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:19.039876    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:19.039876    6300 round_trippers.go:580]     Audit-Id: 66538ca1-e535-45ce-b716-ee2027ef9bfe
	I0722 01:34:19.039876    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:19.039876    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:19.039876    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:19.039876    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:19.039876    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:19 GMT
	I0722 01:34:19.040640    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:19.040640    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
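Each `Response Body:` payload above is a standard v1 Node object (cut short by the logger's "[truncated ...]" elision, before the status section is reached), and the `has status "Ready":"False"` verdict is derived from that object's status.conditions. A small sketch of the extraction follows, using the k8s.io/api types; `ReadyStatus` and the package name are hypothetical.

	package nodestatus

	import (
		"encoding/json"

		corev1 "k8s.io/api/core/v1"
	)

	// ReadyStatus decodes a Node JSON document like the Response Body
	// payloads in this log and returns the NodeReady condition's status,
	// i.e. the value reported as "Ready":"False" above.
	func ReadyStatus(body []byte) (corev1.ConditionStatus, error) {
		var node corev1.Node
		if err := json.Unmarshal(body, &node); err != nil {
			return "", err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status, nil
			}
		}
		return corev1.ConditionUnknown, nil // condition not posted yet
	}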
	I0722 01:34:19.523586    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:19.523586    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:19.523586    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:19.523586    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:19.527636    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:19.527636    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:19.527636    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:19 GMT
	I0722 01:34:19.527636    6300 round_trippers.go:580]     Audit-Id: 1f0a2c1a-2e36-4485-9c4e-89cbe4a5ff3e
	I0722 01:34:19.527636    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:19.527636    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:19.527636    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:19.527636    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:19.528311    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:20.023531    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:20.023531    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:20.023531    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:20.023531    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:20.027169    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:20.027169    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:20.027169    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:20 GMT
	I0722 01:34:20.027169    6300 round_trippers.go:580]     Audit-Id: c5ae4264-01d4-4ed2-a5f8-c5930f2b48f2
	I0722 01:34:20.027701    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:20.027701    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:20.027701    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:20.027701    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:20.028191    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:20.526471    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:20.526584    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:20.526584    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:20.526584    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:20.530167    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:20.530167    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:20.530167    6300 round_trippers.go:580]     Audit-Id: 51e0e81c-fc8c-4d96-b3da-91580665c5d9
	I0722 01:34:20.530167    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:20.530167    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:20.530167    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:20.530769    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:20.530769    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:20 GMT
	I0722 01:34:20.531005    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:21.027867    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:21.028056    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:21.028056    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:21.028056    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:21.030614    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:34:21.030614    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:21.030614    6300 round_trippers.go:580]     Audit-Id: 57d6194c-62f3-4584-9b0e-9c5c06ba18c4
	I0722 01:34:21.030614    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:21.030614    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:21.030614    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:21.030614    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:21.030614    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:21 GMT
	I0722 01:34:21.038123    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:21.525291    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:21.525291    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:21.525291    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:21.525291    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:21.530074    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:21.530074    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:21.530074    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:21 GMT
	I0722 01:34:21.530074    6300 round_trippers.go:580]     Audit-Id: 0d8f0c77-362b-4fa4-990f-6e920de245bf
	I0722 01:34:21.530074    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:21.530074    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:21.530074    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:21.530074    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:21.530074    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:21.530840    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
	I0722 01:34:22.025772    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:22.025772    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:22.025772    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:22.025772    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:22.031571    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:34:22.031752    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:22.031752    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:22.031796    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:22 GMT
	I0722 01:34:22.031796    6300 round_trippers.go:580]     Audit-Id: 53e5efbc-46e1-40d3-a275-0475455c0449
	I0722 01:34:22.031796    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:22.031796    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:22.031796    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:22.031796    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:22.525142    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:22.525142    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:22.525142    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:22.525142    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:22.531395    6300 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 01:34:22.531395    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:22.531395    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:22.531395    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:22.531395    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:22.531395    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:22.531395    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:22 GMT
	I0722 01:34:22.531395    6300 round_trippers.go:580]     Audit-Id: ff78e1fb-bea6-40dd-ab02-dadec27c1d47
	I0722 01:34:22.531395    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:23.038764    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:23.038828    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:23.038828    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:23.038828    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:23.042414    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:23.042414    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:23.042414    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:23.042414    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:23.042414    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:23.042414    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:23.042414    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:23 GMT
	I0722 01:34:23.042414    6300 round_trippers.go:580]     Audit-Id: 113d359b-3f71-4faa-af9d-0e06d8e5a79d
	I0722 01:34:23.043225    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:23.536265    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:23.536556    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:23.536556    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:23.536556    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:23.540866    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:23.541081    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:23.541081    6300 round_trippers.go:580]     Audit-Id: 3feb94d1-3120-4903-a41a-8a025b9f815e
	I0722 01:34:23.541081    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:23.541081    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:23.541081    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:23.541081    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:23.541081    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:23 GMT
	I0722 01:34:23.541410    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:23.541990    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
	I0722 01:34:24.036167    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:24.036198    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:24.036249    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:24.036282    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:24.039250    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:34:24.040233    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:24.040233    6300 round_trippers.go:580]     Audit-Id: 7adb093d-9844-4e2e-9d1c-2f5120217224
	I0722 01:34:24.040233    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:24.040233    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:24.040233    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:24.040233    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:24.040233    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:24 GMT
	I0722 01:34:24.040762    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:24.534708    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:24.534947    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:24.534947    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:24.534947    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:24.538683    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:24.538683    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:24.539516    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:24 GMT
	I0722 01:34:24.539516    6300 round_trippers.go:580]     Audit-Id: 4d4f18e5-8781-4328-ac38-dfb6f2cdc325
	I0722 01:34:24.539516    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:24.539516    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:24.539516    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:24.539516    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:24.539599    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:25.034220    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:25.034220    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:25.034220    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:25.034220    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:25.038790    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:25.039337    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:25.039337    6300 round_trippers.go:580]     Audit-Id: 98d5163f-ec7b-4d64-b2a4-4b2e743322f0
	I0722 01:34:25.039337    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:25.039337    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:25.039337    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:25.039443    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:25.039443    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:25 GMT
	I0722 01:34:25.039979    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:25.536035    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:25.536035    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:25.536118    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:25.536118    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:25.540492    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:25.540492    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:25.540492    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:25.540615    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:25.540615    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:25 GMT
	I0722 01:34:25.540615    6300 round_trippers.go:580]     Audit-Id: f549481e-c790-4675-9487-e1a45ba6ae8f
	I0722 01:34:25.540615    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:25.540615    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:25.540866    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:26.039231    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:26.039231    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:26.039231    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:26.039357    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:26.043541    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:26.043541    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:26.043541    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:26.043644    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:26.043644    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:26 GMT
	I0722 01:34:26.043644    6300 round_trippers.go:580]     Audit-Id: 071b5420-22c3-441d-9dcc-bdb7035e17dc
	I0722 01:34:26.043644    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:26.043644    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:26.044152    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:26.045019    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
	I0722 01:34:26.523583    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:26.523583    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:26.523583    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:26.523883    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:26.526564    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:34:26.527512    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:26.527544    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:26.527544    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:26.527544    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:26.527544    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:26.527544    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:26 GMT
	I0722 01:34:26.527544    6300 round_trippers.go:580]     Audit-Id: 60bb3eb8-1369-47cb-b734-62efb72a0372
	I0722 01:34:26.527816    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:27.024566    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:27.024566    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:27.024566    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:27.024566    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:27.027193    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:34:27.027193    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:27.027193    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:27 GMT
	I0722 01:34:27.027193    6300 round_trippers.go:580]     Audit-Id: 10c30306-d96f-48ef-9159-31bc05b56878
	I0722 01:34:27.027193    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:27.027193    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:27.028219    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:27.028309    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:27.028908    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:27.531708    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:27.531708    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:27.531708    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:27.531708    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:27.537198    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:34:27.537198    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:27.537454    6300 round_trippers.go:580]     Audit-Id: 53a62a89-a9e2-4409-a299-640091f4aff8
	I0722 01:34:27.537454    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:27.537454    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:27.537454    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:27.537454    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:27.537454    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:27 GMT
	I0722 01:34:27.538370    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:28.037937    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:28.037937    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:28.037937    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:28.037937    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:28.041761    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:28.042189    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:28.042189    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:28.042189    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:28.042189    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:28 GMT
	I0722 01:34:28.042189    6300 round_trippers.go:580]     Audit-Id: 29291ce5-a6af-4efe-8b96-5d60193ed090
	I0722 01:34:28.042189    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:28.042189    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:28.042475    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:28.532549    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:28.532619    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:28.532619    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:28.532619    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:28.536198    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:28.536198    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:28.536280    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:28.536280    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:28 GMT
	I0722 01:34:28.536280    6300 round_trippers.go:580]     Audit-Id: 39472405-d173-4c1f-beef-7734f7631bd7
	I0722 01:34:28.536280    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:28.536280    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:28.536280    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:28.536428    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:28.537259    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
	I0722 01:34:29.027358    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:29.027577    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:29.027577    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:29.027577    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:29.031696    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:29.031696    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:29.031696    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:29.031696    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:29.031778    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:29.031778    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:29.031778    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:29 GMT
	I0722 01:34:29.031778    6300 round_trippers.go:580]     Audit-Id: 68a4e407-5cd8-4133-a920-851e1b5726bf
	I0722 01:34:29.032038    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:29.535651    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:29.535915    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:29.535915    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:29.535915    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:29.541326    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:34:29.542340    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:29.542340    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:29.542340    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:29.542340    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:29 GMT
	I0722 01:34:29.542340    6300 round_trippers.go:580]     Audit-Id: f4e6205e-9460-45e1-9c76-fe9af05d6017
	I0722 01:34:29.542340    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:29.542340    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:29.542735    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:30.026363    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:30.026495    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:30.026559    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:30.026559    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:30.030800    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:30.030893    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:30.030893    6300 round_trippers.go:580]     Audit-Id: 761e5138-3cfa-45b3-8db3-05e47ef050ac
	I0722 01:34:30.030893    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:30.030893    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:30.030893    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:30.030893    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:30.030893    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:30 GMT
	I0722 01:34:30.031868    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:30.532082    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:30.532146    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:30.532146    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:30.532206    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:30.536538    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:30.536538    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:30.536538    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:30.536830    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:30 GMT
	I0722 01:34:30.536830    6300 round_trippers.go:580]     Audit-Id: c7f4a766-6029-4cc1-942d-e539820a3eca
	I0722 01:34:30.536830    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:30.536830    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:30.536830    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:30.537097    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:30.537674    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
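
[Editor's note] Every poll in this log returns the same pair of X-Kubernetes-Pf-* response headers; they are emitted by the apiserver's API Priority and Fairness feature and identify the UIDs of the FlowSchema and PriorityLevel that admitted the request, while Audit-Id is the per-request identifier recorded in the apiserver audit log. A small, purely illustrative helper for surfacing those headers from any *http.Response:

    package readiness

    import (
        "fmt"
        "net/http"
    )

    // logAPFHeaders prints the request-tracing headers seen in this log.
    // Editor's sketch; the helper name is hypothetical.
    func logAPFHeaders(resp *http.Response) {
        for _, h := range []string{
            "Audit-Id",
            "X-Kubernetes-Pf-Flowschema-Uid",
            "X-Kubernetes-Pf-Prioritylevel-Uid",
        } {
            fmt.Printf("%s: %s\n", h, resp.Header.Get(h))
        }
    }
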
	I0722 01:34:31.026598    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:31.026598    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:31.026598    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:31.026598    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:31.031844    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:34:31.031844    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:31.031844    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:31.031844    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:31.031844    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:31 GMT
	I0722 01:34:31.032164    6300 round_trippers.go:580]     Audit-Id: d39daf7d-6717-4406-9d1e-48513e979e44
	I0722 01:34:31.032164    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:31.032164    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:31.032358    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:31.529209    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:31.529209    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:31.529209    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:31.529209    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:31.532833    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:31.532833    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:31.533124    6300 round_trippers.go:580]     Audit-Id: 46eb1342-5249-4f24-b10a-ce555e8ff7fa
	I0722 01:34:31.533124    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:31.533124    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:31.533124    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:31.533124    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:31.533124    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:31 GMT
	I0722 01:34:31.533375    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:32.030265    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:32.030265    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:32.030265    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:32.030265    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:32.033868    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:32.033868    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:32.034690    6300 round_trippers.go:580]     Audit-Id: fd2afa2c-9208-4878-af74-86fc9f91f82b
	I0722 01:34:32.034690    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:32.034690    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:32.034690    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:32.034690    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:32.034690    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:32 GMT
	I0722 01:34:32.035020    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:32.530183    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:32.530322    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:32.530322    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:32.530322    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:32.534163    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:32.534163    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:32.534163    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:32.534163    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:32.534649    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:32.534649    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:32.534649    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:32 GMT
	I0722 01:34:32.534649    6300 round_trippers.go:580]     Audit-Id: 34871906-4eb8-4f6f-868e-a640868e9b40
	I0722 01:34:32.534889    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"611","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0722 01:34:33.028585    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:33.028686    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:33.028686    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:33.028686    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:33.034560    6300 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 01:34:33.035359    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:33.035359    6300 round_trippers.go:580]     Audit-Id: 3ee8eea9-d7af-49bf-b56a-7d573c68f071
	I0722 01:34:33.035359    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:33.035359    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:33.035359    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:33.035359    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:33.035359    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:33 GMT
	I0722 01:34:33.035598    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"640","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3910 chars]
	I0722 01:34:33.036244    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
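
[Editor's note] At 01:34:33 the node object's resourceVersion moves from 611 to 640 and the truncated body grows from 3397 to 3910 chars, so the kubelet has written a status update, yet the Ready condition above is still "False" and the poll continues. The same check can be expressed with client-go's wait helpers and an explicit deadline instead of an unbounded loop; this sketch makes the same assumptions as the one above, and the timeout parameter is illustrative rather than the value minikube uses.

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady is an editor's sketch of the readiness check with a deadline.
    func waitNodeReady(c kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // illustrative choice: treat errors as retryable
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
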
	I0722 01:34:33.527918    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:33.527982    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:33.527982    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:33.527982    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:33.531870    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:33.531870    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:33.531870    6300 round_trippers.go:580]     Audit-Id: a01e7f7b-afa4-4273-85a9-3076b2ec66b3
	I0722 01:34:33.531870    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:33.531870    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:33.531870    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:33.531870    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:33.531870    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:33 GMT
	I0722 01:34:33.533003    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"640","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3910 chars]
	I0722 01:34:34.024626    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:34.024626    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:34.024626    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:34.024626    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:34.029898    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:34.029898    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:34.029898    6300 round_trippers.go:580]     Audit-Id: 66627fd4-babd-4824-a706-28a1667db6a2
	I0722 01:34:34.029985    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:34.029985    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:34.029985    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:34.029985    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:34.030025    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:34 GMT
	I0722 01:34:34.030626    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"640","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3910 chars]
	I0722 01:34:34.527016    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:34.527156    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:34.527156    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:34.527156    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:34.530558    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:34.530558    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:34.530558    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:34.530558    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:34.530558    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:34.530558    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:34.530558    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:34 GMT
	I0722 01:34:34.530558    6300 round_trippers.go:580]     Audit-Id: 9b561caf-b917-4cb5-bc0a-559075ba8127
	I0722 01:34:34.530899    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"640","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3910 chars]
	I0722 01:34:35.030481    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:35.030481    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.030481    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.030481    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.035141    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:35.035141    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.035363    6300 round_trippers.go:580]     Audit-Id: 8c262b59-49fc-446b-b00e-17c9328a8c5c
	I0722 01:34:35.035363    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.035363    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.035363    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.035363    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.035363    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.035780    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"640","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3910 chars]
	I0722 01:34:35.036445    6300 node_ready.go:53] node "multinode-227000-m02" has status "Ready":"False"
	I0722 01:34:35.530783    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:35.530847    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.530847    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.530847    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.533709    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:34:35.533709    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.533709    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.533709    6300 round_trippers.go:580]     Audit-Id: ceef4881-ef33-43c5-b03a-41eb0524e375
	I0722 01:34:35.533709    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.533709    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.533709    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.533709    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.534491    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"646","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3776 chars]
	I0722 01:34:35.535020    6300 node_ready.go:49] node "multinode-227000-m02" has status "Ready":"True"
	I0722 01:34:35.535140    6300 node_ready.go:38] duration metric: took 32.5118756s for node "multinode-227000-m02" to be "Ready" ...
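(Annotation: the ~500ms GET loop above is minikube polling the node object until its Ready condition turns True. A minimal client-go sketch of that poll follows; the function name, timeout, and kubeconfig path are illustrative, not taken from the log.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the node on an interval until its
// NodeReady condition reports True, mirroring the poll in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-227000-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}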
	I0722 01:34:35.535140    6300 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 01:34:35.535347    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods
	I0722 01:34:35.535378    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.535378    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.535378    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.543611    6300 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0722 01:34:35.543611    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.543611    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.543611    6300 round_trippers.go:580]     Audit-Id: a06327df-3fa7-4ed4-937a-519934b6dc85
	I0722 01:34:35.543611    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.543611    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.543611    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.543611    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.545376    6300 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"647"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-6hq7s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fea9d464-87a0-47b2-bb1f-7de0dca9db23","resourceVersion":"424","creationTimestamp":"2024-07-22T01:30:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85db09f6-492e-448a-8c46-be3515d2a589","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85db09f6-492e-448a-8c46-be3515d2a589\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70428 chars]
	I0722 01:34:35.549385    6300 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6hq7s" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:35.549807    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6hq7s
	I0722 01:34:35.549807    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.549807    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.549807    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.552655    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:34:35.552655    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.553192    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.553192    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.553192    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.553192    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.553192    6300 round_trippers.go:580]     Audit-Id: 7b8a4698-0e20-495a-9b29-1a8d801452c6
	I0722 01:34:35.553192    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.553748    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6hq7s","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"fea9d464-87a0-47b2-bb1f-7de0dca9db23","resourceVersion":"424","creationTimestamp":"2024-07-22T01:30:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"85db09f6-492e-448a-8c46-be3515d2a589","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85db09f6-492e-448a-8c46-be3515d2a589\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0722 01:34:35.554471    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:34:35.554471    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.554471    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.554471    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.558158    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:35.558158    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.558158    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.558158    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.558231    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.558231    6300 round_trippers.go:580]     Audit-Id: d430aec6-814b-41e0-a618-622c89c7dae1
	I0722 01:34:35.558231    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.558231    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.559090    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:34:35.559593    6300 pod_ready.go:92] pod "coredns-7db6d8ff4d-6hq7s" in "kube-system" namespace has status "Ready":"True"
	I0722 01:34:35.559593    6300 pod_ready.go:81] duration metric: took 9.9712ms for pod "coredns-7db6d8ff4d-6hq7s" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:35.559785    6300 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:35.559830    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-227000
	I0722 01:34:35.559830    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.559830    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.559830    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.562880    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:35.562880    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.562880    6300 round_trippers.go:580]     Audit-Id: 23768079-9f99-487c-ac56-c5d3dfb5e608
	I0722 01:34:35.562880    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.562880    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.562880    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.562880    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.562880    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.562880    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-227000","namespace":"kube-system","uid":"c19bde05-9ea4-4a67-9b99-6165c66ade33","resourceVersion":"382","creationTimestamp":"2024-07-22T01:30:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.193.96:2379","kubernetes.io/config.hash":"635a26be20dd0b8ec8da52b5b98a4659","kubernetes.io/config.mirror":"635a26be20dd0b8ec8da52b5b98a4659","kubernetes.io/config.seen":"2024-07-22T01:30:30.619089190Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0722 01:34:35.563916    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:34:35.563916    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.564068    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.564068    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.566402    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:34:35.566402    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.566402    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.566402    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.566402    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.566402    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.566402    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.566402    6300 round_trippers.go:580]     Audit-Id: 409e78fe-922e-40f2-bce8-942236224bdb
	I0722 01:34:35.567422    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:34:35.567885    6300 pod_ready.go:92] pod "etcd-multinode-227000" in "kube-system" namespace has status "Ready":"True"
	I0722 01:34:35.567885    6300 pod_ready.go:81] duration metric: took 8.0996ms for pod "etcd-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:35.567885    6300 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:35.567885    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-227000
	I0722 01:34:35.567885    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.567885    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.567885    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.574093    6300 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 01:34:35.574093    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.574212    6300 round_trippers.go:580]     Audit-Id: 965b59e3-5681-4528-896c-a646195511b2
	I0722 01:34:35.574212    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.574212    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.574212    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.574212    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.574212    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.575141    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-227000","namespace":"kube-system","uid":"df64a865-3955-4a82-992b-eef0e36422ab","resourceVersion":"383","creationTimestamp":"2024-07-22T01:30:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.193.96:8443","kubernetes.io/config.hash":"1d87c68d3e1d509e27a9fa5e92fff918","kubernetes.io/config.mirror":"1d87c68d3e1d509e27a9fa5e92fff918","kubernetes.io/config.seen":"2024-07-22T01:30:22.625857454Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0722 01:34:35.576375    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:34:35.576375    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.576375    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.576375    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.588755    6300 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0722 01:34:35.589795    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.589795    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.589795    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.589795    6300 round_trippers.go:580]     Audit-Id: b6057187-7691-4b3f-893b-c742777ffd8b
	I0722 01:34:35.589795    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.589795    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.589795    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.590130    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:34:35.590360    6300 pod_ready.go:92] pod "kube-apiserver-multinode-227000" in "kube-system" namespace has status "Ready":"True"
	I0722 01:34:35.590360    6300 pod_ready.go:81] duration metric: took 22.4752ms for pod "kube-apiserver-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:35.590360    6300 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:35.590360    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-227000
	I0722 01:34:35.590360    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.590360    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.590360    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.594060    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:35.594060    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.594060    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.594060    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.594227    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.594227    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.594227    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.594227    6300 round_trippers.go:580]     Audit-Id: f72b2168-3725-4cc6-86aa-194bbde95ad7
	I0722 01:34:35.595083    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-227000","namespace":"kube-system","uid":"aba6daf9-450a-44c2-9608-9f6b86f64b3b","resourceVersion":"380","creationTimestamp":"2024-07-22T01:30:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5777eff8803f26ce696c053b191b7486","kubernetes.io/config.mirror":"5777eff8803f26ce696c053b191b7486","kubernetes.io/config.seen":"2024-07-22T01:30:22.625858454Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0722 01:34:35.595459    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:34:35.595459    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.595459    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.595459    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.598086    6300 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 01:34:35.598086    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.598086    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.598086    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.598767    6300 round_trippers.go:580]     Audit-Id: 81c9b972-0863-4353-abd2-f00a8673580f
	I0722 01:34:35.598767    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.598767    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.598767    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.598872    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:34:35.599041    6300 pod_ready.go:92] pod "kube-controller-manager-multinode-227000" in "kube-system" namespace has status "Ready":"True"
	I0722 01:34:35.599041    6300 pod_ready.go:81] duration metric: took 8.6809ms for pod "kube-controller-manager-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:35.599041    6300 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phj8x" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:35.734198    6300 request.go:629] Waited for 134.97ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-proxy-phj8x
	I0722 01:34:35.734471    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-proxy-phj8x
	I0722 01:34:35.734471    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.734471    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.734471    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.741526    6300 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0722 01:34:35.741526    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.741526    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.741526    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.741526    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.741526    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.741526    6300 round_trippers.go:580]     Audit-Id: d91a66b4-0559-4b97-ace6-c1f039ef99d3
	I0722 01:34:35.741526    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.742271    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-phj8x","generateName":"kube-proxy-","namespace":"kube-system","uid":"c02bcf31-8515-4ddb-8c61-aaffc1561140","resourceVersion":"618","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd58a26a-691f-4060-82de-7268a84fdfe8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd58a26a-691f-4060-82de-7268a84fdfe8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5836 chars]
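(Annotation: the "Waited for ... due to client-side throttling, not priority and fairness" lines that start here come from client-go's token-bucket rate limiter, whose defaults are 5 QPS with a burst of 10; the readiness polls above exhaust the burst quickly. A minimal sketch of raising those limits on rest.Config follows — the values chosen are illustrative.)

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default 5 requests/second once the burst is spent
	cfg.Burst = 100 // default 10; tight poll loops exceed it quickly
	_ = kubernetes.NewForConfigOrDie(cfg)
}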
	I0722 01:34:35.936056    6300 request.go:629] Waited for 192.9703ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:35.936367    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000-m02
	I0722 01:34:35.936367    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:35.936367    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:35.936367    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:35.940217    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:35.940722    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:35.940722    6300 round_trippers.go:580]     Audit-Id: 656089ad-3d89-4a06-bfd2-e11b45bd22cd
	I0722 01:34:35.940812    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:35.940812    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:35.940812    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:35.940812    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:35.940812    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:35 GMT
	I0722 01:34:35.940812    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000-m02","uid":"65a85bfe-40ff-4fb2-a1bc-e6961de033d4","resourceVersion":"646","creationTimestamp":"2024-07-22T01:34:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_22T01_34_02_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:34:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3776 chars]
	I0722 01:34:35.941714    6300 pod_ready.go:92] pod "kube-proxy-phj8x" in "kube-system" namespace has status "Ready":"True"
	I0722 01:34:35.941714    6300 pod_ready.go:81] duration metric: took 342.669ms for pod "kube-proxy-phj8x" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:35.941714    6300 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xl6zz" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:36.139323    6300 request.go:629] Waited for 197.2516ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xl6zz
	I0722 01:34:36.139402    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xl6zz
	I0722 01:34:36.139402    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:36.139402    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:36.139402    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:36.142975    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:36.142975    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:36.142975    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:36.142975    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:36.142975    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:36.142975    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:36.142975    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:36 GMT
	I0722 01:34:36.142975    6300 round_trippers.go:580]     Audit-Id: 3e6e5669-4ee4-464a-a661-686f74370398
	I0722 01:34:36.144055    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xl6zz","generateName":"kube-proxy-","namespace":"kube-system","uid":"ea85e319-224a-4ceb-801e-47e309b123c2","resourceVersion":"375","creationTimestamp":"2024-07-22T01:30:43Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd58a26a-691f-4060-82de-7268a84fdfe8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd58a26a-691f-4060-82de-7268a84fdfe8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0722 01:34:36.342882    6300 request.go:629] Waited for 197.6634ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:34:36.342999    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:34:36.342999    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:36.342999    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:36.342999    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:36.347001    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:36.347068    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:36.347068    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:36.347068    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:36.347068    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:36 GMT
	I0722 01:34:36.347068    6300 round_trippers.go:580]     Audit-Id: b1f0ca87-0b12-45de-a473-8e8981d0c6c9
	I0722 01:34:36.347068    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:36.347068    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:36.347344    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:34:36.347662    6300 pod_ready.go:92] pod "kube-proxy-xl6zz" in "kube-system" namespace has status "Ready":"True"
	I0722 01:34:36.347662    6300 pod_ready.go:81] duration metric: took 405.9432ms for pod "kube-proxy-xl6zz" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:36.347662    6300 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:36.545434    6300 request.go:629] Waited for 197.6869ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-227000
	I0722 01:34:36.545750    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-227000
	I0722 01:34:36.545750    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:36.545750    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:36.545750    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:36.550220    6300 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 01:34:36.550220    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:36.550710    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:36.550710    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:36.550710    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:36 GMT
	I0722 01:34:36.550710    6300 round_trippers.go:580]     Audit-Id: 1c72dce8-41bd-464e-a24d-0eae63e4ac8b
	I0722 01:34:36.550710    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:36.550710    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:36.550994    6300 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-227000","namespace":"kube-system","uid":"04abb215-da93-47b4-9876-a6f25ddb7041","resourceVersion":"381","creationTimestamp":"2024-07-22T01:30:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a40418204e092421fe09dcd13fc0d615","kubernetes.io/config.mirror":"a40418204e092421fe09dcd13fc0d615","kubernetes.io/config.seen":"2024-07-22T01:30:30.619088390Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-22T01:30:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0722 01:34:36.734048    6300 request.go:629] Waited for 181.9953ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:34:36.734048    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes/multinode-227000
	I0722 01:34:36.734048    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:36.734048    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:36.734048    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:36.737688    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:36.738538    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:36.738538    6300 round_trippers.go:580]     Audit-Id: ae66eae7-b8eb-4c50-a757-f2546ed959d2
	I0722 01:34:36.738538    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:36.738538    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:36.738538    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:36.738538    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:36.738538    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:36 GMT
	I0722 01:34:36.738784    6300 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-22T01:30:27Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0722 01:34:36.739320    6300 pod_ready.go:92] pod "kube-scheduler-multinode-227000" in "kube-system" namespace has status "Ready":"True"
	I0722 01:34:36.739391    6300 pod_ready.go:81] duration metric: took 391.7239ms for pod "kube-scheduler-multinode-227000" in "kube-system" namespace to be "Ready" ...
	I0722 01:34:36.739391    6300 pod_ready.go:38] duration metric: took 1.2042368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
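(Annotation: each pod_ready.go check above reduces to reading the pod's PodReady condition. A sketch of that check, assuming one of the pods named in the log; the podReady helper is ours.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True,
// which is what the "has status \"Ready\":\"True\"" lines reflect.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-6hq7s", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podReady(pod))
}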
	I0722 01:34:36.739469    6300 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 01:34:36.752746    6300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 01:34:36.779748    6300 system_svc.go:56] duration metric: took 40.2786ms WaitForService to wait for kubelet
	I0722 01:34:36.780758    6300 kubeadm.go:582] duration metric: took 34.0200236s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 01:34:36.780832    6300 node_conditions.go:102] verifying NodePressure condition ...
	I0722 01:34:36.938070    6300 request.go:629] Waited for 157.1708ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.193.96:8443/api/v1/nodes
	I0722 01:34:36.938273    6300 round_trippers.go:463] GET https://172.28.193.96:8443/api/v1/nodes
	I0722 01:34:36.938273    6300 round_trippers.go:469] Request Headers:
	I0722 01:34:36.938273    6300 round_trippers.go:473]     Accept: application/json, */*
	I0722 01:34:36.938273    6300 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0722 01:34:36.942163    6300 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 01:34:36.942163    6300 round_trippers.go:577] Response Headers:
	I0722 01:34:36.942163    6300 round_trippers.go:580]     Date: Mon, 22 Jul 2024 01:34:36 GMT
	I0722 01:34:36.942163    6300 round_trippers.go:580]     Audit-Id: 1e5d9d23-c4f2-4ce7-8444-0744b707251b
	I0722 01:34:36.942163    6300 round_trippers.go:580]     Cache-Control: no-cache, private
	I0722 01:34:36.942163    6300 round_trippers.go:580]     Content-Type: application/json
	I0722 01:34:36.942163    6300 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 65e1d0f9-e468-495a-844a-1058951f67c6
	I0722 01:34:36.942163    6300 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f74e3c79-78a9-45a2-8713-d76eb8bb16e5
	I0722 01:34:36.943027    6300 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"648"},"items":[{"metadata":{"name":"multinode-227000","uid":"84169f4d-c2c5-407d-ba48-4af56da83e09","resourceVersion":"405","creationTimestamp":"2024-07-22T01:30:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-227000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6369f37f56e44caee4b8f9e88810d0d58f35a189","minikube.k8s.io/name":"multinode-227000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_22T01_30_31_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9780 chars]
	I0722 01:34:36.944622    6300 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 01:34:36.944734    6300 node_conditions.go:123] node cpu capacity is 2
	I0722 01:34:36.944734    6300 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 01:34:36.944734    6300 node_conditions.go:123] node cpu capacity is 2
	I0722 01:34:36.944734    6300 node_conditions.go:105] duration metric: took 163.8999ms to run NodePressure ...
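(Annotation: the NodePressure pass above lists all nodes once and reads each node's capacity; the "cpu capacity is 2" and "ephemeral capacity is 17734596Ki" figures come from node.Status.Capacity. A minimal sketch of that read follows.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]                 // "2" in this log
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage] // "17734596Ki" here
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}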
	I0722 01:34:36.944734    6300 start.go:241] waiting for startup goroutines ...
	I0722 01:34:36.944734    6300 start.go:255] writing updated cluster config ...
	I0722 01:34:36.957597    6300 ssh_runner.go:195] Run: rm -f paused
	I0722 01:34:37.113589    6300 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0722 01:34:37.116875    6300 out.go:177] * Done! kubectl is now configured to use "multinode-227000" cluster and "default" namespace by default
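(Annotation: the "(minor skew: 0)" figure in the line above compares the minor version components of kubectl and the cluster. A sketch of that arithmetic under the assumption of plain "major.minor.patch" strings; the minorSkew helper is ours, not minikube's.)

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor
// version components of two "major.minor.patch" version strings.
func minorSkew(client, cluster string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(client) - minor(cluster)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.30.3", "1.30.3")) // 0, as reported in the log
}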
	
	
	==> Docker <==
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.093473150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.116275795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.116582290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.116788987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.117037983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 01:31:06 multinode-227000 cri-dockerd[1333]: time="2024-07-22T01:31:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a316355b1faed06921b45542ac7bdc366c071a26fc3e0dfd62d4c449ce395126/resolv.conf as [nameserver 172.28.192.1]"
	Jul 22 01:31:06 multinode-227000 cri-dockerd[1333]: time="2024-07-22T01:31:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c12f23673a0d2a5d748ae32f014274f48e5aeb8f7d058a11feeac102ebf8e8a/resolv.conf as [nameserver 172.28.192.1]"
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.527923123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.529306802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.529427300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.529673897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.633879013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.634017511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.634037811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 01:31:06 multinode-227000 dockerd[1440]: time="2024-07-22T01:31:06.634284807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 01:35:04 multinode-227000 dockerd[1440]: time="2024-07-22T01:35:04.025264958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 01:35:04 multinode-227000 dockerd[1440]: time="2024-07-22T01:35:04.025564155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 01:35:04 multinode-227000 dockerd[1440]: time="2024-07-22T01:35:04.025592255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 01:35:04 multinode-227000 dockerd[1440]: time="2024-07-22T01:35:04.025862253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 01:35:04 multinode-227000 cri-dockerd[1333]: time="2024-07-22T01:35:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9b68c36b34798c06e180bbd0526c80e189a134a0cd6da78921df0b54b6f3e8ff/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 22 01:35:05 multinode-227000 cri-dockerd[1333]: time="2024-07-22T01:35:05Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 22 01:35:05 multinode-227000 dockerd[1440]: time="2024-07-22T01:35:05.986223624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 01:35:05 multinode-227000 dockerd[1440]: time="2024-07-22T01:35:05.986315127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 01:35:05 multinode-227000 dockerd[1440]: time="2024-07-22T01:35:05.986330727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 01:35:05 multinode-227000 dockerd[1440]: time="2024-07-22T01:35:05.986453332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3e95f6f68cda9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   51 seconds ago      Running             busybox                   0                   9b68c36b34798       busybox-fc5497c4f-tzrg5
	c4a7cf4f69cb7       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   9c12f23673a0d       coredns-7db6d8ff4d-6hq7s
	1bbf337f4ab9e       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   a316355b1faed       storage-provisioner
	4b90a865a28bf       kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a              5 minutes ago       Running             kindnet-cni               0                   4c6bc3c0c195a       kindnet-hw45n
	bc7feac549286       55bb025d2cfa5                                                                                         5 minutes ago       Running             kube-proxy                0                   bbb9a626e5f45       kube-proxy-xl6zz
	36fca4da105cb       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   399d87481e35f       etcd-multinode-227000
	39c266800d09d       3edc18e7b7672                                                                                         5 minutes ago       Running             kube-scheduler            0                   fecefb8aa7e7b       kube-scheduler-multinode-227000
	bc52df36f9351       76932a3b37d7e                                                                                         5 minutes ago       Running             kube-controller-manager   0                   3fa34b3fbe45b       kube-controller-manager-multinode-227000
	8515e6df217f1       1f6d574d502f3                                                                                         5 minutes ago       Running             kube-apiserver            0                   38a15cb59e910       kube-apiserver-multinode-227000
	
	
	==> coredns [c4a7cf4f69cb] <==
	[INFO] 10.244.1.2:50798 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000166406s
	[INFO] 10.244.0.3:37695 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196407s
	[INFO] 10.244.0.3:55765 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000178406s
	[INFO] 10.244.0.3:38321 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159006s
	[INFO] 10.244.0.3:33900 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074203s
	[INFO] 10.244.0.3:33679 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000078102s
	[INFO] 10.244.0.3:48440 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077303s
	[INFO] 10.244.0.3:38658 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176006s
	[INFO] 10.244.0.3:52960 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077902s
	[INFO] 10.244.1.2:43625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205607s
	[INFO] 10.244.1.2:59262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200207s
	[INFO] 10.244.1.2:36044 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162205s
	[INFO] 10.244.1.2:56253 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000171006s
	[INFO] 10.244.0.3:47219 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000250308s
	[INFO] 10.244.0.3:33280 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078102s
	[INFO] 10.244.0.3:47774 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086903s
	[INFO] 10.244.0.3:36768 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076002s
	[INFO] 10.244.1.2:34653 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000282009s
	[INFO] 10.244.1.2:41693 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00027601s
	[INFO] 10.244.1.2:53381 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078603s
	[INFO] 10.244.1.2:42532 - 5 "PTR IN 1.192.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000195707s
	[INFO] 10.244.0.3:54124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093003s
	[INFO] 10.244.0.3:33806 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000368612s
	[INFO] 10.244.0.3:52423 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000058701s
	[INFO] 10.244.0.3:55671 - 5 "PTR IN 1.192.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054102s
	
	
	==> describe nodes <==
	Name:               multinode-227000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-227000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=multinode-227000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T01_30_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 01:30:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-227000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 01:35:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 01:35:37 +0000   Mon, 22 Jul 2024 01:30:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 01:35:37 +0000   Mon, 22 Jul 2024 01:30:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 01:35:37 +0000   Mon, 22 Jul 2024 01:30:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 01:35:37 +0000   Mon, 22 Jul 2024 01:31:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.193.96
	  Hostname:    multinode-227000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 36837cf5f2cf4575bdc7f0bbbf4c196c
	  System UUID:                d0614c74-6e91-8a49-86f4-8cf3b3f126a1
	  Boot ID:                    f8cdfc9f-e587-4276-b0bd-7824d334a040
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tzrg5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 coredns-7db6d8ff4d-6hq7s                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m12s
	  kube-system                 etcd-multinode-227000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m26s
	  kube-system                 kindnet-hw45n                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m13s
	  kube-system                 kube-apiserver-multinode-227000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-controller-manager-multinode-227000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-xl6zz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-multinode-227000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m10s  kube-proxy       
	  Normal  Starting                 5m26s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m26s  kubelet          Node multinode-227000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s  kubelet          Node multinode-227000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s  kubelet          Node multinode-227000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m13s  node-controller  Node multinode-227000 event: Registered Node multinode-227000 in Controller
	  Normal  NodeReady                4m51s  kubelet          Node multinode-227000 status is now: NodeReady
	
	
	Name:               multinode-227000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-227000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6369f37f56e44caee4b8f9e88810d0d58f35a189
	                    minikube.k8s.io/name=multinode-227000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T01_34_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 01:34:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-227000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 01:35:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 01:35:34 +0000   Mon, 22 Jul 2024 01:34:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 01:35:34 +0000   Mon, 22 Jul 2024 01:34:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 01:35:34 +0000   Mon, 22 Jul 2024 01:34:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 01:35:34 +0000   Mon, 22 Jul 2024 01:34:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.193.41
	  Hostname:    multinode-227000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 01103748319946ad837f41f6c4635bca
	  System UUID:                d4b60fc5-d5de-ef44-b406-718beac67fb8
	  Boot ID:                    e5c93590-28a8-4d07-a8c4-7ccbab3f7a06
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5bv2m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kindnet-5wlq8              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      115s
	  kube-system                 kube-proxy-phj8x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  115s (x2 over 115s)  kubelet          Node multinode-227000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x2 over 115s)  kubelet          Node multinode-227000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x2 over 115s)  kubelet          Node multinode-227000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           113s                 node-controller  Node multinode-227000-m02 event: Registered Node multinode-227000-m02 in Controller
	  Normal  NodeReady                81s                  kubelet          Node multinode-227000-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.809067] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul22 01:29] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.143242] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +31.040088] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[  +0.097540] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.543728] systemd-fstab-generator[1049]: Ignoring "noauto" option for root device
	[  +0.183155] systemd-fstab-generator[1061]: Ignoring "noauto" option for root device
	[  +0.221592] systemd-fstab-generator[1075]: Ignoring "noauto" option for root device
	[  +2.793036] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +0.191348] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.197098] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	[Jul22 01:30] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[ +11.321956] systemd-fstab-generator[1424]: Ignoring "noauto" option for root device
	[  +0.098793] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.782327] systemd-fstab-generator[1671]: Ignoring "noauto" option for root device
	[  +6.790998] systemd-fstab-generator[1874]: Ignoring "noauto" option for root device
	[  +0.101905] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.077746] systemd-fstab-generator[2282]: Ignoring "noauto" option for root device
	[  +0.144793] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.098045] systemd-fstab-generator[2472]: Ignoring "noauto" option for root device
	[  +0.240775] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.072021] kauditd_printk_skb: 51 callbacks suppressed
	[Jul22 01:35] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [36fca4da105c] <==
	{"level":"info","ts":"2024-07-22T01:30:24.94599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81d7dfe090383c12 became leader at term 2"}
	{"level":"info","ts":"2024-07-22T01:30:24.946239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 81d7dfe090383c12 elected leader 81d7dfe090383c12 at term 2"}
	{"level":"info","ts":"2024-07-22T01:30:24.954102Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T01:30:24.960317Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"81d7dfe090383c12","local-member-attributes":"{Name:multinode-227000 ClientURLs:[https://172.28.193.96:2379]}","request-path":"/0/members/81d7dfe090383c12/attributes","cluster-id":"81b51149874e328e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T01:30:24.962017Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T01:30:24.985113Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.193.96:2379"}
	{"level":"info","ts":"2024-07-22T01:30:24.987238Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"81b51149874e328e","local-member-id":"81d7dfe090383c12","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T01:30:24.987486Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T01:30:24.987617Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T01:30:24.988101Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T01:30:24.990286Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T01:30:24.992997Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T01:30:25.005919Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T01:30:48.201184Z","caller":"traceutil/trace.go:171","msg":"trace[1360355685] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"224.689884ms","start":"2024-07-22T01:30:47.976471Z","end":"2024-07-22T01:30:48.201161Z","steps":["trace[1360355685] 'process raft request'  (duration: 224.539089ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T01:31:21.152367Z","caller":"traceutil/trace.go:171","msg":"trace[1529556203] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"126.351994ms","start":"2024-07-22T01:31:21.025995Z","end":"2024-07-22T01:31:21.152347Z","steps":["trace[1529556203] 'process raft request'  (duration: 126.251695ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T01:33:55.184663Z","caller":"traceutil/trace.go:171","msg":"trace[688522670] transaction","detail":"{read_only:false; response_revision:561; number_of_response:1; }","duration":"334.608467ms","start":"2024-07-22T01:33:54.850036Z","end":"2024-07-22T01:33:55.184645Z","steps":["trace[688522670] 'process raft request'  (duration: 334.373269ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T01:33:55.185036Z","caller":"traceutil/trace.go:171","msg":"trace[1241871301] linearizableReadLoop","detail":"{readStateIndex:614; appliedIndex:614; }","duration":"323.678262ms","start":"2024-07-22T01:33:54.861343Z","end":"2024-07-22T01:33:55.185021Z","steps":["trace[1241871301] 'read index received'  (duration: 323.672862ms)","trace[1241871301] 'applied index is now lower than readState.Index'  (duration: 4.3µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T01:33:55.185353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"323.98946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-07-22T01:33:55.185761Z","caller":"traceutil/trace.go:171","msg":"trace[675652279] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:561; }","duration":"324.430656ms","start":"2024-07-22T01:33:54.861317Z","end":"2024-07-22T01:33:55.185747Z","steps":["trace[675652279] 'agreement among raft nodes before linearized reading'  (duration: 323.871461ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T01:33:55.185986Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T01:33:54.850016Z","time spent":"334.764864ms","remote":"127.0.0.1:40470","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":553,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-227000\" mod_revision:554 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-227000\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-227000\" > >"}
	{"level":"warn","ts":"2024-07-22T01:33:55.18592Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T01:33:54.861303Z","time spent":"324.588354ms","remote":"127.0.0.1:40344","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1140,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-07-22T01:34:13.667001Z","caller":"traceutil/trace.go:171","msg":"trace[94903575] linearizableReadLoop","detail":"{readStateIndex:673; appliedIndex:672; }","duration":"118.44248ms","start":"2024-07-22T01:34:13.54854Z","end":"2024-07-22T01:34:13.666982Z","steps":["trace[94903575] 'read index received'  (duration: 118.193883ms)","trace[94903575] 'applied index is now lower than readState.Index'  (duration: 247.497µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-22T01:34:13.667413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.877376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-227000-m02\" ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2024-07-22T01:34:13.66768Z","caller":"traceutil/trace.go:171","msg":"trace[2005867422] range","detail":"{range_begin:/registry/minions/multinode-227000-m02; range_end:; response_count:1; response_revision:614; }","duration":"119.196873ms","start":"2024-07-22T01:34:13.548473Z","end":"2024-07-22T01:34:13.66767Z","steps":["trace[2005867422] 'agreement among raft nodes before linearized reading'  (duration: 118.680578ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-22T01:34:13.668325Z","caller":"traceutil/trace.go:171","msg":"trace[48205374] transaction","detail":"{read_only:false; response_revision:614; number_of_response:1; }","duration":"266.129308ms","start":"2024-07-22T01:34:13.402186Z","end":"2024-07-22T01:34:13.668315Z","steps":["trace[48205374] 'process raft request'  (duration: 264.593121ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:35:56 up 7 min,  0 users,  load average: 0.89, 0.57, 0.28
	Linux multinode-227000 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4b90a865a28b] <==
	I0722 01:34:54.103906       1 main.go:322] Node multinode-227000-m02 has CIDR [10.244.1.0/24] 
	I0722 01:35:04.111311       1 main.go:295] Handling node with IPs: map[172.28.193.96:{}]
	I0722 01:35:04.111347       1 main.go:299] handling current node
	I0722 01:35:04.111366       1 main.go:295] Handling node with IPs: map[172.28.193.41:{}]
	I0722 01:35:04.111373       1 main.go:322] Node multinode-227000-m02 has CIDR [10.244.1.0/24] 
	I0722 01:35:14.104137       1 main.go:295] Handling node with IPs: map[172.28.193.96:{}]
	I0722 01:35:14.104305       1 main.go:299] handling current node
	I0722 01:35:14.104426       1 main.go:295] Handling node with IPs: map[172.28.193.41:{}]
	I0722 01:35:14.104545       1 main.go:322] Node multinode-227000-m02 has CIDR [10.244.1.0/24] 
	I0722 01:35:24.109704       1 main.go:295] Handling node with IPs: map[172.28.193.96:{}]
	I0722 01:35:24.109827       1 main.go:299] handling current node
	I0722 01:35:24.109998       1 main.go:295] Handling node with IPs: map[172.28.193.41:{}]
	I0722 01:35:24.110490       1 main.go:322] Node multinode-227000-m02 has CIDR [10.244.1.0/24] 
	I0722 01:35:34.111592       1 main.go:295] Handling node with IPs: map[172.28.193.41:{}]
	I0722 01:35:34.111733       1 main.go:322] Node multinode-227000-m02 has CIDR [10.244.1.0/24] 
	I0722 01:35:34.112002       1 main.go:295] Handling node with IPs: map[172.28.193.96:{}]
	I0722 01:35:34.112092       1 main.go:299] handling current node
	I0722 01:35:44.108301       1 main.go:295] Handling node with IPs: map[172.28.193.96:{}]
	I0722 01:35:44.108412       1 main.go:299] handling current node
	I0722 01:35:44.108434       1 main.go:295] Handling node with IPs: map[172.28.193.41:{}]
	I0722 01:35:44.108497       1 main.go:322] Node multinode-227000-m02 has CIDR [10.244.1.0/24] 
	I0722 01:35:54.103716       1 main.go:295] Handling node with IPs: map[172.28.193.96:{}]
	I0722 01:35:54.103826       1 main.go:299] handling current node
	I0722 01:35:54.103847       1 main.go:295] Handling node with IPs: map[172.28.193.41:{}]
	I0722 01:35:54.103855       1 main.go:322] Node multinode-227000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [8515e6df217f] <==
	I0722 01:30:28.037755       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0722 01:30:28.047250       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0722 01:30:28.047338       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 01:30:29.317082       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 01:30:29.408561       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 01:30:29.562757       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0722 01:30:29.579353       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.193.96]
	I0722 01:30:29.580611       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 01:30:29.588641       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 01:30:30.168534       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 01:30:30.591785       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 01:30:30.620998       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0722 01:30:30.642291       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 01:30:43.974923       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0722 01:30:44.333902       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0722 01:35:09.510072       1 conn.go:339] Error on socket receive: read tcp 172.28.193.96:8443->172.28.192.1:53700: use of closed network connection
	E0722 01:35:10.050442       1 conn.go:339] Error on socket receive: read tcp 172.28.193.96:8443->172.28.192.1:53702: use of closed network connection
	E0722 01:35:10.636311       1 conn.go:339] Error on socket receive: read tcp 172.28.193.96:8443->172.28.192.1:53704: use of closed network connection
	E0722 01:35:11.186907       1 conn.go:339] Error on socket receive: read tcp 172.28.193.96:8443->172.28.192.1:53706: use of closed network connection
	E0722 01:35:11.755444       1 conn.go:339] Error on socket receive: read tcp 172.28.193.96:8443->172.28.192.1:53708: use of closed network connection
	E0722 01:35:12.399860       1 conn.go:339] Error on socket receive: read tcp 172.28.193.96:8443->172.28.192.1:53710: use of closed network connection
	E0722 01:35:13.402700       1 conn.go:339] Error on socket receive: read tcp 172.28.193.96:8443->172.28.192.1:53714: use of closed network connection
	E0722 01:35:23.938443       1 conn.go:339] Error on socket receive: read tcp 172.28.193.96:8443->172.28.192.1:53716: use of closed network connection
	E0722 01:35:24.467361       1 conn.go:339] Error on socket receive: read tcp 172.28.193.96:8443->172.28.192.1:53719: use of closed network connection
	E0722 01:35:35.003406       1 conn.go:339] Error on socket receive: read tcp 172.28.193.96:8443->172.28.192.1:53721: use of closed network connection
	
	
	==> kube-controller-manager [bc52df36f935] <==
	I0722 01:30:44.797257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="147.374146ms"
	I0722 01:30:44.797768       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="141.995µs"
	I0722 01:30:44.803305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.697µs"
	I0722 01:30:45.553613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="120.745369ms"
	I0722 01:30:45.582328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.614474ms"
	I0722 01:30:45.582398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.298µs"
	I0722 01:31:05.475506       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="192.497µs"
	I0722 01:31:05.537191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="95.599µs"
	I0722 01:31:07.433641       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="822.887µs"
	I0722 01:31:07.491470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.98548ms"
	I0722 01:31:07.492110       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.799µs"
	I0722 01:31:08.428142       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0722 01:34:01.799580       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-227000-m02\" does not exist"
	I0722 01:34:01.859045       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-227000-m02" podCIDRs=["10.244.1.0/24"]
	I0722 01:34:03.464683       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-227000-m02"
	I0722 01:34:35.277387       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-227000-m02"
	I0722 01:35:03.427815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.608976ms"
	I0722 01:35:03.441056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.17639ms"
	I0722 01:35:03.442154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.3µs"
	I0722 01:35:03.458199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.9µs"
	I0722 01:35:03.461447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.699µs"
	I0722 01:35:06.503317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.155925ms"
	I0722 01:35:06.503777       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="134.004µs"
	I0722 01:35:06.794061       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.138417ms"
	I0722 01:35:06.794127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.801µs"
	
	
	==> kube-proxy [bc7feac54928] <==
	I0722 01:30:45.739152       1 server_linux.go:69] "Using iptables proxy"
	I0722 01:30:45.759024       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.193.96"]
	I0722 01:30:45.834591       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 01:30:45.834916       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 01:30:45.835048       1 server_linux.go:165] "Using iptables Proxier"
	I0722 01:30:45.841091       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 01:30:45.842043       1 server.go:872] "Version info" version="v1.30.3"
	I0722 01:30:45.842171       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 01:30:45.845701       1 config.go:192] "Starting service config controller"
	I0722 01:30:45.845756       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 01:30:45.845796       1 config.go:101] "Starting endpoint slice config controller"
	I0722 01:30:45.845844       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 01:30:45.847494       1 config.go:319] "Starting node config controller"
	I0722 01:30:45.847551       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 01:30:45.946068       1 shared_informer.go:320] Caches are synced for service config
	I0722 01:30:45.946176       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 01:30:45.947679       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [39c266800d09] <==
	W0722 01:30:28.303812       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 01:30:28.304051       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 01:30:28.424736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 01:30:28.424799       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0722 01:30:28.433488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 01:30:28.433537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 01:30:28.442116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 01:30:28.442345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0722 01:30:28.455251       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 01:30:28.455347       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 01:30:28.475986       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0722 01:30:28.476185       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0722 01:30:28.517255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 01:30:28.517330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 01:30:28.547729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 01:30:28.547780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 01:30:28.576045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 01:30:28.576134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 01:30:28.578080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 01:30:28.578270       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 01:30:28.630178       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 01:30:28.630213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 01:30:28.717970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 01:30:28.718793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0722 01:30:30.402181       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 01:31:30 multinode-227000 kubelet[2289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:31:30 multinode-227000 kubelet[2289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:31:30 multinode-227000 kubelet[2289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:32:30 multinode-227000 kubelet[2289]: E0722 01:32:30.757435    2289 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:32:30 multinode-227000 kubelet[2289]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:32:30 multinode-227000 kubelet[2289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:32:30 multinode-227000 kubelet[2289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:32:30 multinode-227000 kubelet[2289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:33:30 multinode-227000 kubelet[2289]: E0722 01:33:30.755521    2289 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:33:30 multinode-227000 kubelet[2289]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:33:30 multinode-227000 kubelet[2289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:33:30 multinode-227000 kubelet[2289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:33:30 multinode-227000 kubelet[2289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:34:30 multinode-227000 kubelet[2289]: E0722 01:34:30.754599    2289 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:34:30 multinode-227000 kubelet[2289]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:34:30 multinode-227000 kubelet[2289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:34:30 multinode-227000 kubelet[2289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:34:30 multinode-227000 kubelet[2289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 01:35:03 multinode-227000 kubelet[2289]: I0722 01:35:03.416922    2289 topology_manager.go:215] "Topology Admit Handler" podUID="0023554e-ed99-4159-944d-3b921a052f3b" podNamespace="default" podName="busybox-fc5497c4f-tzrg5"
	Jul 22 01:35:03 multinode-227000 kubelet[2289]: I0722 01:35:03.472807    2289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpf8j\" (UniqueName: \"kubernetes.io/projected/0023554e-ed99-4159-944d-3b921a052f3b-kube-api-access-hpf8j\") pod \"busybox-fc5497c4f-tzrg5\" (UID: \"0023554e-ed99-4159-944d-3b921a052f3b\") " pod="default/busybox-fc5497c4f-tzrg5"
	Jul 22 01:35:30 multinode-227000 kubelet[2289]: E0722 01:35:30.753761    2289 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 01:35:30 multinode-227000 kubelet[2289]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 01:35:30 multinode-227000 kubelet[2289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 01:35:30 multinode-227000 kubelet[2289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 01:35:30 multinode-227000 kubelet[2289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 01:35:48.103251    9824 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-227000 -n multinode-227000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-227000 -n multinode-227000: (13.0236705s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-227000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (59.74s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (285.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-227000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-227000
E0722 01:52:35.651251    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-227000: (1m36.9815857s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-227000 --wait=true -v=8 --alsologtostderr
E0722 01:54:11.948557    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 01:54:32.431324    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-227000 --wait=true -v=8 --alsologtostderr: exit status 90 (2m56.5242202s)

                                                
                                                
-- stdout --
	* [multinode-227000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-227000" primary control-plane node in "multinode-227000" cluster
	* Restarting existing hyperv VM for "multinode-227000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 01:53:16.637004    5136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0722 01:53:16.715136    5136 out.go:291] Setting OutFile to fd 860 ...
	I0722 01:53:16.715136    5136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:53:16.715136    5136 out.go:304] Setting ErrFile to fd 768...
	I0722 01:53:16.715136    5136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:53:16.745000    5136 out.go:298] Setting JSON to false
	I0722 01:53:16.749853    5136 start.go:129] hostinfo: {"hostname":"minikube6","uptime":128404,"bootTime":1721484792,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0722 01:53:16.749915    5136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 01:53:16.895104    5136 out.go:177] * [multinode-227000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0722 01:53:16.939927    5136 notify.go:220] Checking for updates...
	I0722 01:53:16.956530    5136 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 01:53:16.968368    5136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 01:53:16.995575    5136 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0722 01:53:17.009517    5136 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 01:53:17.022893    5136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 01:53:17.028482    5136 config.go:182] Loaded profile config "multinode-227000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:53:17.030146    5136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 01:53:22.645364    5136 out.go:177] * Using the hyperv driver based on existing profile
	I0722 01:53:22.656313    5136 start.go:297] selected driver: hyperv
	I0722 01:53:22.663415    5136 start.go:901] validating driver "hyperv" against &{Name:multinode-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.96 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.193.41 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.193.243 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 01:53:22.663815    5136 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 01:53:22.720965    5136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 01:53:22.721045    5136 cni.go:84] Creating CNI manager for ""
	I0722 01:53:22.721045    5136 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 01:53:22.721284    5136 start.go:340] cluster config:
	{Name:multinode-227000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-227000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.193.96 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.193.41 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.193.243 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 01:53:22.721284    5136 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 01:53:22.750376    5136 out.go:177] * Starting "multinode-227000" primary control-plane node in "multinode-227000" cluster
	I0722 01:53:22.801018    5136 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 01:53:22.801240    5136 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0722 01:53:22.801240    5136 cache.go:56] Caching tarball of preloaded images
	I0722 01:53:22.801575    5136 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 01:53:22.801575    5136 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
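
The preload check above is a plain cache hit: if the tarball for the requested Kubernetes version and container runtime is already on disk, the download is skipped and only its existence is verified. A minimal Go sketch of that decision; preloadPath is a hypothetical helper mirroring the cached filename in the log, not minikube's real path logic:

	// Sketch: skip the preload download when the tarball is already cached.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath is a hypothetical helper; minikube derives the real path
	// from the k8s version, container runtime, and architecture.
	func preloadPath(minikubeHome, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(`C:\Users\jenkins.minikube6\minikube-integration\.minikube`, "v1.30.3", "docker")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("preload missing, would download:", p)
		}
	}
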
	I0722 01:53:22.801575    5136 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\config.json ...
	I0722 01:53:22.805695    5136 start.go:360] acquireMachinesLock for multinode-227000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 01:53:22.806114    5136 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-227000"
	I0722 01:53:22.806114    5136 start.go:96] Skipping create...Using existing machine configuration
	I0722 01:53:22.806114    5136 fix.go:54] fixHost starting: 
	I0722 01:53:22.806114    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:53:25.505347    5136 main.go:141] libmachine: [stdout =====>] : Off
	
	I0722 01:53:25.505347    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:25.517250    5136 fix.go:112] recreateIfNeeded on multinode-227000: state=Stopped err=<nil>
	W0722 01:53:25.517250    5136 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 01:53:25.632765    5136 out.go:177] * Restarting existing hyperv VM for "multinode-227000" ...
	I0722 01:53:25.648427    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-227000
	I0722 01:53:28.675633    5136 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:53:28.680038    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:28.680038    5136 main.go:141] libmachine: Waiting for host to start...
	I0722 01:53:28.680148    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:53:30.889207    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:53:30.889207    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:30.900156    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:53:33.410539    5136 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:53:33.410539    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:34.425647    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:53:36.615122    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:53:36.615317    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:36.615396    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:53:39.086704    5136 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:53:39.092872    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:40.108937    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:53:42.310295    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:53:42.315992    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:42.315992    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:53:44.863092    5136 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:53:44.863092    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:45.877681    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:53:48.076983    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:53:48.089371    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:48.089371    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:53:50.623900    5136 main.go:141] libmachine: [stdout =====>] : 
	I0722 01:53:50.627997    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:51.639402    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:53:53.832460    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:53:53.832460    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:53.843408    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:53:56.376332    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:53:56.376332    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:56.391658    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:53:58.470876    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:53:58.470876    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:53:58.482808    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:01.003931    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:01.003931    5136 main.go:141] libmachine: [stderr =====>] : 
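
The "Waiting for host to start..." stretch above is a poll loop: after Start-VM, libmachine repeatedly asks PowerShell for the VM state and for the first IP address of the first network adapter, retrying until DHCP has handed the guest an address (01:53:28 through 01:53:56 here). A rough, self-contained sketch of that loop; the PowerShell command strings are taken verbatim from the log, while the Go wrapper is illustrative rather than the actual driver code:

	// Sketch: poll Hyper-V via PowerShell until the VM reports an IP address.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func ps(cmd string) (string, error) {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", cmd,
		).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		vm := "multinode-227000"
		for {
			state, _ := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
			ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if state == "Running" && ip != "" {
				fmt.Println("host is up at", ip)
				return
			}
			time.Sleep(1 * time.Second) // the log shows a short sleep between attempts
		}
	}
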
	I0722 01:54:01.017597    5136 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-227000\config.json ...
	I0722 01:54:01.020031    5136 machine.go:94] provisionDockerMachine start ...
	I0722 01:54:01.020031    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:03.147570    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:03.147570    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:03.150966    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:05.665401    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:05.667124    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:05.671956    5136 main.go:141] libmachine: Using SSH client type: native
	I0722 01:54:05.672153    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.192.184 22 <nil> <nil>}
	I0722 01:54:05.672153    5136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 01:54:05.802989    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 01:54:05.802989    5136 buildroot.go:166] provisioning hostname "multinode-227000"
	I0722 01:54:05.802989    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:07.905532    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:07.916706    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:07.917037    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:10.478074    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:10.478074    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:10.494940    5136 main.go:141] libmachine: Using SSH client type: native
	I0722 01:54:10.495725    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.192.184 22 <nil> <nil>}
	I0722 01:54:10.495725    5136 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-227000 && echo "multinode-227000" | sudo tee /etc/hostname
	I0722 01:54:10.667268    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-227000
	
	I0722 01:54:10.667268    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:12.777531    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:12.777531    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:12.787243    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:15.259255    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:15.271013    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:15.276677    5136 main.go:141] libmachine: Using SSH client type: native
	I0722 01:54:15.277555    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.192.184 22 <nil> <nil>}
	I0722 01:54:15.277555    5136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-227000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-227000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-227000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 01:54:15.423532    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
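
The shell fragment above keeps /etc/hosts consistent with the freshly set hostname: if no line maps the name yet, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. A sketch of rendering that script per hostname before shipping it over SSH; the template mirrors the log, and generating it this way is an illustration rather than minikube's exact code:

	// Sketch: render the /etc/hosts fixup script for a given hostname.
	package main

	import "fmt"

	const hostsFixup = `
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`

	func main() {
		fmt.Println(fmt.Sprintf(hostsFixup, "multinode-227000"))
	}
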
	I0722 01:54:15.423532    5136 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0722 01:54:15.423532    5136 buildroot.go:174] setting up certificates
	I0722 01:54:15.423532    5136 provision.go:84] configureAuth start
	I0722 01:54:15.427946    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:17.535394    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:17.535394    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:17.547068    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:20.036915    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:20.036915    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:20.037605    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:22.142224    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:22.142224    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:22.150511    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:24.634407    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:24.634407    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:24.645897    5136 provision.go:143] copyHostCerts
	I0722 01:54:24.646218    5136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0722 01:54:24.646218    5136 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0722 01:54:24.646218    5136 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0722 01:54:24.647041    5136 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0722 01:54:24.647870    5136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0722 01:54:24.648529    5136 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0722 01:54:24.648529    5136 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0722 01:54:24.649074    5136 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0722 01:54:24.650213    5136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0722 01:54:24.650566    5136 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0722 01:54:24.650634    5136 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0722 01:54:24.650634    5136 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0722 01:54:24.652071    5136 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-227000 san=[127.0.0.1 172.28.192.184 localhost minikube multinode-227000]
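
configureAuth regenerates the machine's server certificate so its subject alternative names cover every address a client might dial: loopback, the VM's current IP, and the host names listed in the san=[...] field above. A sketch of building such a certificate with Go's crypto/x509, assuming the CA certificate and key were already loaded from ca.pem/ca-key.pem; this illustrates the SAN layout and is not minikube's cert package:

	// Sketch: issue a server certificate whose SANs match the log line above,
	// signed by an already-loaded CA (ca *x509.Certificate, caKey crypto.Signer).
	package sketch

	import (
		"crypto"
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	func issueServerCert(ca *x509.Certificate, caKey crypto.Signer) ([]byte, error) {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-227000"}},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.192.184")},
			DNSNames:     []string{"localhost", "minikube", "multinode-227000"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil // written out as server.pem
	}
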
	I0722 01:54:24.756639    5136 provision.go:177] copyRemoteCerts
	I0722 01:54:24.770739    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 01:54:24.770739    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:26.852696    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:26.852696    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:26.861771    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:29.349181    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:29.349181    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:29.360983    5136 sshutil.go:53] new ssh client: &{IP:172.28.192.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:54:29.475482    5136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7045268s)
	I0722 01:54:29.475482    5136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0722 01:54:29.476015    5136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 01:54:29.522555    5136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0722 01:54:29.523383    5136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0722 01:54:29.566480    5136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0722 01:54:29.566480    5136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 01:54:29.603107    5136 provision.go:87] duration metric: took 14.1793991s to configureAuth
	I0722 01:54:29.603107    5136 buildroot.go:189] setting minikube options for container-runtime
	I0722 01:54:29.610550    5136 config.go:182] Loaded profile config "multinode-227000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:54:29.610550    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:31.697488    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:31.697488    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:31.708789    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:34.225326    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:34.235633    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:34.240447    5136 main.go:141] libmachine: Using SSH client type: native
	I0722 01:54:34.241194    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.192.184 22 <nil> <nil>}
	I0722 01:54:34.241194    5136 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 01:54:34.374439    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 01:54:34.374439    5136 buildroot.go:70] root file system type: tmpfs
	I0722 01:54:34.374439    5136 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 01:54:34.374973    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:36.474502    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:36.474502    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:36.485798    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:39.023639    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:39.034876    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:39.043084    5136 main.go:141] libmachine: Using SSH client type: native
	I0722 01:54:39.044780    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.192.184 22 <nil> <nil>}
	I0722 01:54:39.045652    5136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 01:54:39.206930    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 01:54:39.206930    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:41.325010    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:41.325010    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:41.336489    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:43.773900    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:43.784867    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:43.792121    5136 main.go:141] libmachine: Using SSH client type: native
	I0722 01:54:43.792121    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.192.184 22 <nil> <nil>}
	I0722 01:54:43.792121    5136 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 01:54:46.287702    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 01:54:46.287912    5136 machine.go:97] duration metric: took 45.2673198s to provisionDockerMachine
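
The docker.service update a few lines up is deliberately idempotent: the new content is written to docker.service.new, diffed against the live unit, and only when they differ (or, as here, when the unit does not yet exist on the freshly restarted VM, so diff itself fails) does the || branch move it into place and run daemon-reload, enable, and restart. A sketch of composing that guarded one-liner; the resulting string matches the command in the log:

	// Sketch: build the guarded unit-install command run over SSH above.
	package main

	import "fmt"

	func installUnitCmd(unit string) string {
		cur := "/lib/systemd/system/" + unit
		return fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
				"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
			cur, unit)
	}

	func main() {
		fmt.Println(installUnitCmd("docker.service"))
	}
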
	I0722 01:54:46.287912    5136 start.go:293] postStartSetup for "multinode-227000" (driver="hyperv")
	I0722 01:54:46.287979    5136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 01:54:46.301559    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 01:54:46.301559    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:48.354573    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:48.365097    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:48.365332    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:50.885217    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:50.885340    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:50.885340    5136 sshutil.go:53] new ssh client: &{IP:172.28.192.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:54:50.992969    5136 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6913524s)
	I0722 01:54:51.003715    5136 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 01:54:51.012244    5136 command_runner.go:130] > NAME=Buildroot
	I0722 01:54:51.012244    5136 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0722 01:54:51.012244    5136 command_runner.go:130] > ID=buildroot
	I0722 01:54:51.012244    5136 command_runner.go:130] > VERSION_ID=2023.02.9
	I0722 01:54:51.012370    5136 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0722 01:54:51.012421    5136 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 01:54:51.012540    5136 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0722 01:54:51.012600    5136 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0722 01:54:51.013936    5136 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> 51002.pem in /etc/ssl/certs
	I0722 01:54:51.013936    5136 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem -> /etc/ssl/certs/51002.pem
	I0722 01:54:51.026204    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 01:54:51.047905    5136 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\51002.pem --> /etc/ssl/certs/51002.pem (1708 bytes)
	I0722 01:54:51.093248    5136 start.go:296] duration metric: took 4.8052086s for postStartSetup
	I0722 01:54:51.093248    5136 fix.go:56] duration metric: took 1m28.2860393s for fixHost
	I0722 01:54:51.093248    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:53.148679    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:53.161068    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:53.161271    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:54:55.620248    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:54:55.620248    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:55.637969    5136 main.go:141] libmachine: Using SSH client type: native
	I0722 01:54:55.638105    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.192.184 22 <nil> <nil>}
	I0722 01:54:55.638644    5136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 01:54:55.768035    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721613295.791042218
	
	I0722 01:54:55.768035    5136 fix.go:216] guest clock: 1721613295.791042218
	I0722 01:54:55.768035    5136 fix.go:229] Guest: 2024-07-22 01:54:55.791042218 +0000 UTC Remote: 2024-07-22 01:54:51.093248 +0000 UTC m=+94.544578301 (delta=4.697794218s)
	I0722 01:54:55.768224    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:54:57.856821    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:54:57.857444    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:54:57.857663    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:55:00.328967    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:55:00.341083    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:55:00.345220    5136 main.go:141] libmachine: Using SSH client type: native
	I0722 01:55:00.346341    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbdaa40] 0xbdd620 <nil>  [] 0s} 172.28.192.184 22 <nil> <nil>}
	I0722 01:55:00.346341    5136 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721613295
	I0722 01:55:00.489068    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jul 22 01:54:55 UTC 2024
	
	I0722 01:55:00.489068    5136 fix.go:236] clock set: Mon Jul 22 01:54:55 UTC 2024
	 (err=<nil>)
	I0722 01:55:00.489068    5136 start.go:83] releasing machines lock for "multinode-227000", held for 1m37.6817431s
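
The clock fix just above takes two SSH round trips: read the guest's `date +%s.%N`, compare it with the host clock (the delta line shows the guest running about 4.7s ahead), and reset it with `sudo date -s @<seconds>`. A sketch of the skew measurement, assuming the guest's output has already been captured over SSH:

	// Sketch: measure guest/host clock skew from `date +%s.%N` output.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func skew(guestDateOutput string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestDateOutput, 64) // e.g. "1721613295.791042218"
		if err != nil {
			return 0, err
		}
		guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
		return guest.Sub(hostNow), nil
	}

	func main() {
		// Host timestamp taken from the "Remote:" half of the delta line above.
		d, _ := skew("1721613295.791042218", time.Date(2024, time.July, 22, 1, 54, 51, 93248000, time.UTC))
		fmt.Println("delta:", d) // ~4.7s; past a threshold, minikube runs `sudo date -s @<secs>`
	}
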
	I0722 01:55:00.489068    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:55:02.623511    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:55:02.625326    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:55:02.625326    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:55:05.110795    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:55:05.122660    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:55:05.127740    5136 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0722 01:55:05.127740    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:55:05.139010    5136 ssh_runner.go:195] Run: cat /version.json
	I0722 01:55:05.139010    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:55:07.313070    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:55:07.313070    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:55:07.313070    5136 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:55:07.321832    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:55:07.321895    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:55:07.321985    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:55:09.979239    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:55:09.979239    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:55:09.984114    5136 sshutil.go:53] new ssh client: &{IP:172.28.192.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:55:10.005185    5136 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:55:10.006328    5136 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:55:10.006328    5136 sshutil.go:53] new ssh client: &{IP:172.28.192.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:55:10.074265    5136 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0722 01:55:10.074265    5136 ssh_runner.go:235] Completed: cat /version.json: (4.9351941s)
	I0722 01:55:10.088204    5136 ssh_runner.go:195] Run: systemctl --version
	I0722 01:55:10.093839    5136 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0722 01:55:10.093993    5136 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9660371s)
	W0722 01:55:10.094143    5136 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0722 01:55:10.101936    5136 command_runner.go:130] > systemd 252 (252)
	I0722 01:55:10.101936    5136 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0722 01:55:10.109913    5136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 01:55:10.115575    5136 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0722 01:55:10.121811    5136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 01:55:10.134018    5136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 01:55:10.163585    5136 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0722 01:55:10.163661    5136 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 01:55:10.163661    5136 start.go:495] detecting cgroup driver to use...
	I0722 01:55:10.163978    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 01:55:10.197967    5136 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W0722 01:55:10.209748    5136 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0722 01:55:10.210391    5136 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0722 01:55:10.212275    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 01:55:10.242879    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 01:55:10.263408    5136 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 01:55:10.274186    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 01:55:10.305916    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 01:55:10.337631    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 01:55:10.372747    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 01:55:10.403733    5136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 01:55:10.439315    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 01:55:10.471057    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 01:55:10.499822    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
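
Before settling on Docker, minikube normalizes /etc/containerd/config.toml with the sed runs above: pin the sandbox (pause) image, force SystemdCgroup = false so containerd matches the "cgroupfs" driver chosen for this VM, migrate the deprecated io.containerd.runtime.v1.linux and io.containerd.runc.v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same rewrites expressed as in-memory regexp replacements; this sketches the effect, not the code minikube runs:

	// Sketch: the effect of the sed edits above, as in-memory rewrites.
	package main

	import (
		"fmt"
		"regexp"
	)

	func normalize(config string) string {
		rules := []struct{ re, repl string }{
			{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
			{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
			{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
			{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
			{`(?m)^( *)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
		}
		for _, r := range rules {
			config = regexp.MustCompile(r.re).ReplaceAllString(config, r.repl)
		}
		return config
	}

	func main() {
		fmt.Println(normalize(`    SystemdCgroup = true` + "\n" + `    sandbox_image = "k8s.gcr.io/pause:3.6"`))
	}
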
	I0722 01:55:10.530841    5136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 01:55:10.547774    5136 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0722 01:55:10.559111    5136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 01:55:10.588399    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:55:10.786846    5136 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 01:55:10.820010    5136 start.go:495] detecting cgroup driver to use...
	I0722 01:55:10.835253    5136 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 01:55:10.860680    5136 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0722 01:55:10.860745    5136 command_runner.go:130] > [Unit]
	I0722 01:55:10.860745    5136 command_runner.go:130] > Description=Docker Application Container Engine
	I0722 01:55:10.860745    5136 command_runner.go:130] > Documentation=https://docs.docker.com
	I0722 01:55:10.860745    5136 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0722 01:55:10.860745    5136 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0722 01:55:10.860745    5136 command_runner.go:130] > StartLimitBurst=3
	I0722 01:55:10.860745    5136 command_runner.go:130] > StartLimitIntervalSec=60
	I0722 01:55:10.860745    5136 command_runner.go:130] > [Service]
	I0722 01:55:10.860745    5136 command_runner.go:130] > Type=notify
	I0722 01:55:10.860745    5136 command_runner.go:130] > Restart=on-failure
	I0722 01:55:10.860745    5136 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0722 01:55:10.860745    5136 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0722 01:55:10.860745    5136 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0722 01:55:10.860745    5136 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0722 01:55:10.860745    5136 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0722 01:55:10.860745    5136 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0722 01:55:10.860745    5136 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0722 01:55:10.860745    5136 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0722 01:55:10.860745    5136 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0722 01:55:10.860745    5136 command_runner.go:130] > ExecStart=
	I0722 01:55:10.860745    5136 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0722 01:55:10.860745    5136 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0722 01:55:10.860745    5136 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0722 01:55:10.860745    5136 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0722 01:55:10.860745    5136 command_runner.go:130] > LimitNOFILE=infinity
	I0722 01:55:10.860745    5136 command_runner.go:130] > LimitNPROC=infinity
	I0722 01:55:10.860745    5136 command_runner.go:130] > LimitCORE=infinity
	I0722 01:55:10.860745    5136 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0722 01:55:10.860745    5136 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0722 01:55:10.860745    5136 command_runner.go:130] > TasksMax=infinity
	I0722 01:55:10.860745    5136 command_runner.go:130] > TimeoutStartSec=0
	I0722 01:55:10.860745    5136 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0722 01:55:10.860745    5136 command_runner.go:130] > Delegate=yes
	I0722 01:55:10.860745    5136 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0722 01:55:10.860745    5136 command_runner.go:130] > KillMode=process
	I0722 01:55:10.860745    5136 command_runner.go:130] > [Install]
	I0722 01:55:10.860745    5136 command_runner.go:130] > WantedBy=multi-user.target
	I0722 01:55:10.873614    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 01:55:10.906353    5136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 01:55:10.951096    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 01:55:10.986978    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 01:55:11.022055    5136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 01:55:11.090603    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 01:55:11.113701    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 01:55:11.149208    5136 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0722 01:55:11.162444    5136 ssh_runner.go:195] Run: which cri-dockerd
	I0722 01:55:11.165156    5136 command_runner.go:130] > /usr/bin/cri-dockerd
	I0722 01:55:11.181459    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 01:55:11.196950    5136 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 01:55:11.240267    5136 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 01:55:11.434810    5136 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 01:55:11.602700    5136 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 01:55:11.602700    5136 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 01:55:11.643831    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 01:55:11.846611    5136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 01:56:12.942512    5136 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0722 01:56:12.958095    5136 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0722 01:56:12.958388    5136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1110196s)
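
This is the step that sinks the run: `sudo systemctl restart docker` blocks on the unit's start job and returns non-zero after a full 1m1s, the "Job for docker.service failed because the control process exited with error code" path two lines up. The natural fallback, which the log takes next, is to dump the unit's journal for diagnosis. A small sketch of that restart-then-journal pattern, with plain exec.Command standing in for the SSH runner used here:

	// Sketch: restart a unit and, on failure, capture its journal for diagnosis.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func restartWithJournal(unit string) error {
		if err := exec.Command("sudo", "systemctl", "restart", unit).Run(); err != nil {
			out, _ := exec.Command("sudo", "journalctl", "--no-pager", "-u", unit).CombinedOutput()
			return fmt.Errorf("restart %s failed: %w\n%s", unit, err, out)
		}
		return nil
	}

	func main() {
		if err := restartWithJournal("docker"); err != nil {
			fmt.Println(err)
		}
	}
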
	I0722 01:56:12.970179    5136 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0722 01:56:12.993919    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 systemd[1]: Starting Docker Application Container Engine...
	I0722 01:56:12.994424    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[653]: time="2024-07-22T01:54:44.350627305Z" level=info msg="Starting up"
	I0722 01:56:12.994424    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[653]: time="2024-07-22T01:54:44.351662167Z" level=info msg="containerd not running, starting managed containerd"
	I0722 01:56:12.994576    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[653]: time="2024-07-22T01:54:44.352568221Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0722 01:56:12.994599    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.385537385Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0722 01:56:12.994599    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410094047Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0722 01:56:12.994656    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410196253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0722 01:56:12.994705    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410261057Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410277658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410716384Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410859993Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.411039304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.411150110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.411170711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.411181912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.411838951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.412582896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.415628877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.415773786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.415911394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.416027401Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.416615836Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.416720242Z" level=info msg="metadata content store policy set" policy=shared
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422714299Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422802804Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422827006Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422843007Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422857107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0722 01:56:12.994736    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422922611Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0722 01:56:12.995275    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423118423Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0722 01:56:12.995275    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423219929Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0722 01:56:12.995275    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423242230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423255731Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423269132Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423281733Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423294034Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423307034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423320435Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423333136Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423351737Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423364938Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423385239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423402940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423415741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423433942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423448143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423461243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423473044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423485345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423500746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423520447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423534448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423546349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423570750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423587951Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423608052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995342    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423623253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.995861    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423637354Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0722 01:56:12.995861    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423703058Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0722 01:56:12.995997    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423722259Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0722 01:56:12.995997    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423773062Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0722 01:56:12.995997    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423787463Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0722 01:56:12.995997    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423797163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0722 01:56:12.996093    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423810964Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0722 01:56:12.996093    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423821765Z" level=info msg="NRI interface is disabled by configuration."
	I0722 01:56:12.996093    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.424021977Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0722 01:56:12.996093    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.424155785Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0722 01:56:12.996093    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.424218189Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0722 01:56:12.996093    5136 command_runner.go:130] > Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.424245190Z" level=info msg="containerd successfully booted in 0.041359s"
	I0722 01:56:12.996188    5136 command_runner.go:130] > Jul 22 01:54:45 multinode-227000 dockerd[653]: time="2024-07-22T01:54:45.406219391Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0722 01:56:12.996188    5136 command_runner.go:130] > Jul 22 01:54:45 multinode-227000 dockerd[653]: time="2024-07-22T01:54:45.700643832Z" level=info msg="Loading containers: start."
	I0722 01:56:12.996188    5136 command_runner.go:130] > Jul 22 01:54:45 multinode-227000 dockerd[653]: time="2024-07-22T01:54:45.972241597Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0722 01:56:12.996188    5136 command_runner.go:130] > Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.112983850Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0722 01:56:12.996277    5136 command_runner.go:130] > Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.216705557Z" level=info msg="Loading containers: done."
	I0722 01:56:12.996277    5136 command_runner.go:130] > Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.241403888Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0722 01:56:12.996277    5136 command_runner.go:130] > Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.241954420Z" level=info msg="Daemon has completed initialization"
	I0722 01:56:12.996277    5136 command_runner.go:130] > Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.307535418Z" level=info msg="API listen on /var/run/docker.sock"
	I0722 01:56:12.996277    5136 command_runner.go:130] > Jul 22 01:54:46 multinode-227000 systemd[1]: Started Docker Application Container Engine.
	I0722 01:56:12.996346    5136 command_runner.go:130] > Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.308972301Z" level=info msg="API listen on [::]:2376"
	I0722 01:56:12.996346    5136 command_runner.go:130] > Jul 22 01:55:11 multinode-227000 dockerd[653]: time="2024-07-22T01:55:11.894381044Z" level=info msg="Processing signal 'terminated'"
	I0722 01:56:12.996371    5136 command_runner.go:130] > Jul 22 01:55:11 multinode-227000 systemd[1]: Stopping Docker Application Container Engine...
	I0722 01:56:12.996371    5136 command_runner.go:130] > Jul 22 01:55:11 multinode-227000 dockerd[653]: time="2024-07-22T01:55:11.896579856Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0722 01:56:12.996371    5136 command_runner.go:130] > Jul 22 01:55:11 multinode-227000 dockerd[653]: time="2024-07-22T01:55:11.897103659Z" level=info msg="Daemon shutdown complete"
	I0722 01:56:12.996371    5136 command_runner.go:130] > Jul 22 01:55:11 multinode-227000 dockerd[653]: time="2024-07-22T01:55:11.897197660Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0722 01:56:12.996371    5136 command_runner.go:130] > Jul 22 01:55:11 multinode-227000 dockerd[653]: time="2024-07-22T01:55:11.897264160Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0722 01:56:12.996371    5136 command_runner.go:130] > Jul 22 01:55:12 multinode-227000 systemd[1]: docker.service: Deactivated successfully.
	I0722 01:56:12.996371    5136 command_runner.go:130] > Jul 22 01:55:12 multinode-227000 systemd[1]: Stopped Docker Application Container Engine.
	I0722 01:56:12.996596    5136 command_runner.go:130] > Jul 22 01:55:12 multinode-227000 systemd[1]: Starting Docker Application Container Engine...
	I0722 01:56:12.996596    5136 command_runner.go:130] > Jul 22 01:55:12 multinode-227000 dockerd[1090]: time="2024-07-22T01:55:12.958285167Z" level=info msg="Starting up"
	I0722 01:56:12.996728    5136 command_runner.go:130] > Jul 22 01:56:12 multinode-227000 dockerd[1090]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0722 01:56:12.996728    5136 command_runner.go:130] > Jul 22 01:56:12 multinode-227000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0722 01:56:12.996728    5136 command_runner.go:130] > Jul 22 01:56:12 multinode-227000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0722 01:56:12.996728    5136 command_runner.go:130] > Jul 22 01:56:12 multinode-227000 systemd[1]: Failed to start Docker Application Container Engine.
	I0722 01:56:13.005195    5136 out.go:177] 
	W0722 01:56:13.008180    5136 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 22 01:54:44 multinode-227000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 01:54:44 multinode-227000 dockerd[653]: time="2024-07-22T01:54:44.350627305Z" level=info msg="Starting up"
	Jul 22 01:54:44 multinode-227000 dockerd[653]: time="2024-07-22T01:54:44.351662167Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 01:54:44 multinode-227000 dockerd[653]: time="2024-07-22T01:54:44.352568221Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.385537385Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410094047Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410196253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410261057Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410277658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410716384Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.410859993Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.411039304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.411150110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.411170711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.411181912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.411838951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.412582896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.415628877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.415773786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.415911394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.416027401Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.416615836Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.416720242Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422714299Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422802804Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422827006Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422843007Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422857107Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.422922611Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423118423Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423219929Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423242230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423255731Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423269132Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423281733Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423294034Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423307034Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423320435Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423333136Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423351737Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423364938Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423385239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423402940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423415741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423433942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423448143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423461243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423473044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423485345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423500746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423520447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423534448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423546349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423570750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423587951Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423608052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423623253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423637354Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423703058Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423722259Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423773062Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423787463Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423797163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423810964Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.423821765Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.424021977Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.424155785Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.424218189Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 01:54:44 multinode-227000 dockerd[659]: time="2024-07-22T01:54:44.424245190Z" level=info msg="containerd successfully booted in 0.041359s"
	Jul 22 01:54:45 multinode-227000 dockerd[653]: time="2024-07-22T01:54:45.406219391Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 01:54:45 multinode-227000 dockerd[653]: time="2024-07-22T01:54:45.700643832Z" level=info msg="Loading containers: start."
	Jul 22 01:54:45 multinode-227000 dockerd[653]: time="2024-07-22T01:54:45.972241597Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.112983850Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.216705557Z" level=info msg="Loading containers: done."
	Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.241403888Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.241954420Z" level=info msg="Daemon has completed initialization"
	Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.307535418Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 01:54:46 multinode-227000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 01:54:46 multinode-227000 dockerd[653]: time="2024-07-22T01:54:46.308972301Z" level=info msg="API listen on [::]:2376"
	Jul 22 01:55:11 multinode-227000 dockerd[653]: time="2024-07-22T01:55:11.894381044Z" level=info msg="Processing signal 'terminated'"
	Jul 22 01:55:11 multinode-227000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 01:55:11 multinode-227000 dockerd[653]: time="2024-07-22T01:55:11.896579856Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 01:55:11 multinode-227000 dockerd[653]: time="2024-07-22T01:55:11.897103659Z" level=info msg="Daemon shutdown complete"
	Jul 22 01:55:11 multinode-227000 dockerd[653]: time="2024-07-22T01:55:11.897197660Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 01:55:11 multinode-227000 dockerd[653]: time="2024-07-22T01:55:11.897264160Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 01:55:12 multinode-227000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 01:55:12 multinode-227000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 01:55:12 multinode-227000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 01:55:12 multinode-227000 dockerd[1090]: time="2024-07-22T01:55:12.958285167Z" level=info msg="Starting up"
	Jul 22 01:56:12 multinode-227000 dockerd[1090]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 01:56:12 multinode-227000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 01:56:12 multinode-227000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 01:56:12 multinode-227000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0722 01:56:13.008180    5136 out.go:239] * 
	W0722 01:56:13.010591    5136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 01:56:13.015376    5136 out.go:177] 

                                                
                                                
** /stderr **
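The failure above reduces to one line: dockerd[1090] timed out dialing /run/containerd/containerd.sock ("context deadline exceeded"), so docker.service never came back up after the restart. A minimal diagnostic sketch on the guest, assuming the VM is still reachable over SSH and that (as on the minikube ISO) docker and containerd run as systemd units; the first two commands are the ones the error text itself suggests, the last step is speculative:

    minikube ssh -p multinode-227000                 # shell into the node (assumes the VM is reachable)
    sudo systemctl status docker.service             # first command suggested by the error text
    sudo journalctl -xeu docker.service --no-pager   # second command suggested by the error text
    sudo systemctl status containerd                 # is the system containerd unit running?
    ls -l /run/containerd/containerd.sock            # does the socket dockerd failed to dial exist?
    sudo systemctl restart containerd docker         # speculative: restart both units in dependency order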
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-227000" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-227000
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-227000	172.28.193.96
multinode-227000-m02	172.28.193.41
multinode-227000-m03	172.28.193.243

                                                
                                                
After restart: multinode-227000	172.28.192.184
multinode-227000-m02	172.28.193.41
multinode-227000-m03	172.28.193.243
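Note the before/after diff: only the control-plane node changed address (172.28.193.96 -> 172.28.192.184), which is consistent with the Hyper-V switch handing out a fresh DHCP lease when the VM restarted. A sketch for confirming this from the Windows host; it assumes the Hyper-V PowerShell module is installed and that the hyperv driver named the VM after the minikube profile:

    Get-VMNetworkAdapter -VMName "multinode-227000" |
        Select-Object -ExpandProperty IPAddresses          # addresses Hyper-V currently sees on the adapter
    out/minikube-windows-amd64.exe ip -p multinode-227000  # minikube's own record of the node IP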
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-227000 -n multinode-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-227000 -n multinode-227000: exit status 6 (11.6743186s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 01:56:13.585644   11740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0722 01:56:25.107886   11740 status.go:417] kubeconfig endpoint: get endpoint: "multinode-227000" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-227000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (285.85s)
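The failure above reduces to two observations from the captured logs: dockerd gave up after sixty seconds waiting on /run/containerd/containerd.sock (Starting up at 01:55:12, context deadline exceeded at 01:56:12), and the control-plane IP changed across the restart (172.28.193.96 before, 172.28.192.184 after), which is consistent with Hyper-V's DHCP issuing a new lease to the recreated adapter. A minimal triage sketch, assuming the multinode-227000 VM is still reachable over SSH; the profile name is taken from the log:

    out/minikube-windows-amd64.exe ssh -p multinode-227000 "sudo systemctl status containerd"
    out/minikube-windows-amd64.exe ssh -p multinode-227000 "sudo journalctl -u containerd --no-pager | tail -n 50"

If containerd never reached active (running), the docker.service failure in the journal excerpt follows directly from the dial deadline.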

                                                
                                    
TestMultiNode/serial/DeleteNode (34.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-227000 node delete m03: exit status 103 (7.0804495s)

                                                
                                                
-- stdout --
	* The control-plane node multinode-227000 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-227000"

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 01:56:25.260403   11644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-windows-amd64.exe -p multinode-227000 node delete m03": exit status 103
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-227000 status --alsologtostderr: exit status 7 (15.8327445s)

                                                
                                                
-- stdout --
	multinode-227000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
	multinode-227000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-227000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 01:56:32.348433   11132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0722 01:56:32.437031   11132 out.go:291] Setting OutFile to fd 1004 ...
	I0722 01:56:32.439340   11132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:56:32.439340   11132 out.go:304] Setting ErrFile to fd 768...
	I0722 01:56:32.439340   11132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:56:32.442649   11132 out.go:298] Setting JSON to false
	I0722 01:56:32.442649   11132 mustload.go:65] Loading cluster: multinode-227000
	I0722 01:56:32.442649   11132 notify.go:220] Checking for updates...
	I0722 01:56:32.453468   11132 config.go:182] Loaded profile config "multinode-227000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:56:32.453468   11132 status.go:255] checking status of multinode-227000 ...
	I0722 01:56:32.453855   11132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:56:34.615273   11132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:56:34.627504   11132 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:56:34.627504   11132 status.go:330] multinode-227000 host status = "Running" (err=<nil>)
	I0722 01:56:34.627652   11132 host.go:66] Checking if "multinode-227000" exists ...
	I0722 01:56:34.628363   11132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:56:36.718951   11132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:56:36.718951   11132 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:56:36.731661   11132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:56:39.192527   11132 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:56:39.204149   11132 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:56:39.204149   11132 host.go:66] Checking if "multinode-227000" exists ...
	I0722 01:56:39.214694   11132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 01:56:39.214694   11132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:56:41.279713   11132 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:56:41.291454   11132 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:56:41.291566   11132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:56:43.758662   11132 main.go:141] libmachine: [stdout =====>] : 172.28.192.184
	
	I0722 01:56:43.758662   11132 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:56:43.769764   11132 sshutil.go:53] new ssh client: &{IP:172.28.192.184 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:56:43.865949   11132 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6503084s)
	I0722 01:56:43.878007   11132 ssh_runner.go:195] Run: systemctl --version
	I0722 01:56:43.898820   11132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E0722 01:56:43.925550   11132 status.go:417] kubeconfig endpoint: get endpoint: "multinode-227000" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 01:56:43.925550   11132 api_server.go:166] Checking apiserver status ...
	I0722 01:56:43.938215   11132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0722 01:56:43.960719   11132 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0722 01:56:43.960719   11132 status.go:422] multinode-227000 apiserver status = Stopped (err=<nil>)
	I0722 01:56:43.960719   11132 status.go:257] multinode-227000 status: &{Name:multinode-227000 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 01:56:43.960854   11132 status.go:255] checking status of multinode-227000-m02 ...
	I0722 01:56:43.961615   11132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:56:46.008670   11132 main.go:141] libmachine: [stdout =====>] : Off
	
	I0722 01:56:46.008670   11132 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:56:46.008670   11132 status.go:330] multinode-227000-m02 host status = "Stopped" (err=<nil>)
	I0722 01:56:46.008670   11132 status.go:343] host is not running, skipping remaining checks
	I0722 01:56:46.008670   11132 status.go:257] multinode-227000-m02 status: &{Name:multinode-227000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0722 01:56:46.019624   11132 status.go:255] checking status of multinode-227000-m03 ...
	I0722 01:56:46.020829   11132 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m03 ).state
	I0722 01:56:48.065084   11132 main.go:141] libmachine: [stdout =====>] : Off
	
	I0722 01:56:48.065084   11132 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:56:48.065084   11132 status.go:330] multinode-227000-m03 host status = "Stopped" (err=<nil>)
	I0722 01:56:48.076144   11132 status.go:343] host is not running, skipping remaining checks
	I0722 01:56:48.076144   11132 status.go:257] multinode-227000-m03 status: &{Name:multinode-227000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-227000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-227000 -n multinode-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-227000 -n multinode-227000: exit status 6 (11.8154714s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 01:56:48.207278    4608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0722 01:56:59.860696    4608 status.go:417] kubeconfig endpoint: get endpoint: "multinode-227000" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-227000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeleteNode (34.75s)
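Both status probes in this test fail the same way: the multinode-227000 entry is missing from the kubeconfig, so the command exits non-zero even though the host VM reports Running. The warning text itself names the fix; a sketch of applying it, assuming the profile directory is intact:

    out/minikube-windows-amd64.exe update-context -p multinode-227000
    kubectl config get-contexts

update-context rewrites the kubeconfig endpoint for the profile, and get-contexts confirms the entry is back. Note this repairs only the stale kubeconfig; the apiserver shown as Stopped still needs a minikube start.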

                                                
                                    
TestMultiNode/serial/StopMultiNode (38.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 stop
E0722 01:57:15.178449    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-227000 stop: exit status 1 (26.9350421s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-227000-m03"  ...
	* Stopping node "multinode-227000-m02"  ...
	* Stopping node "multinode-227000"  ...
	* Powering off "multinode-227000" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 01:57:00.013624    3040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-windows-amd64.exe -p multinode-227000 stop": exit status 1
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-227000 status: context deadline exceeded (0s)
multinode_test.go:354: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-227000 status" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-227000 -n multinode-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-227000 -n multinode-227000: exit status 6 (12.0106152s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 01:57:26.970995    4256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0722 01:57:38.818338    4256 status.go:417] kubeconfig endpoint: get endpoint: "multinode-227000" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-227000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/StopMultiNode (38.96s)
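Here stop exited 1 while powering off multinode-227000 over SSH, so the harness cannot tell whether the node actually shut down. The same PowerShell probe the harness itself uses (visible in the traces above) settles it; a sketch, assuming a session where the Hyper-V module is available:

    powershell -NoProfile -NonInteractive "( Hyper-V\Get-VM multinode-227000 ).State"

Off would mean only the exit code was wrong; Running would mean the SSH power-off genuinely failed.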

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (303.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-749900 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-749900 --driver=hyperv: exit status 1 (4m59.7473365s)

                                                
                                                
-- stdout --
	* [NoKubernetes-749900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-749900" primary control-plane node in "NoKubernetes-749900" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 02:14:03.729629    2588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-749900 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-749900 -n NoKubernetes-749900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-749900 -n NoKubernetes-749900: exit status 7 (3.7895516s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	W0722 02:19:03.532948    4056 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0722 02:19:07.121871    4056 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-749900".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-749900 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-749900:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-749900" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (303.54s)
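Unlike the multinode failures, the VM here is simply gone: status reports Nonexistent and Get-VM cannot find NoKubernetes-749900, so the machine was never registered with Hyper-V (or was removed) despite start running for five minutes. A quick inventory check, assuming the same PowerShell session:

    powershell -NoProfile -NonInteractive "Hyper-V\Get-VM | Select-Object Name, State"

If the listing is empty, checking for a leftover machines\NoKubernetes-749900 directory under MINIKUBE_HOME (the layout is visible in the SSH key path logged earlier) narrows down how far creation got.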

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (10800.408s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2417440475.exe start -p stopped-upgrade-124200 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2417440475.exe start -p stopped-upgrade-124200 --memory=2200 --vm-driver=hyperv: (5m43.9589688s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2417440475.exe -p stopped-upgrade-124200 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2417440475.exe -p stopped-upgrade-124200 stop: (37.5853253s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-124200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestForceSystemdFlag (6m27s)
	TestKubernetesUpgrade (11m35s)
	TestRunningBinaryUpgrade (11m35s)
	TestStoppedBinaryUpgrade (6m40s)
	TestStoppedBinaryUpgrade/Upgrade (6m39s)
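This is Go's per-binary test deadline rather than an assertion failure: after 3h0m0s the runtime aborts the process and dumps every goroutine, which is what the rest of this section is. The deadline comes from go test's -timeout flag (-test.timeout when invoking a prebuilt test binary); a rerun sketch, assuming go test drives the suite:

    go test ./test/integration -run TestStoppedBinaryUpgrade/Upgrade -timeout 3h30m

With five tests still listed as running at the cutoff, a larger deadline only helps if the hung start invocations can actually complete.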

                                                
                                                
goroutine 2295 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000508ea0, 0xc0011c9bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0009902a0, {0x47a80e0, 0x2a, 0x2a}, {0x240041f?, 0x2380cf?, 0x47cb4e0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0008b9900)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0008b9900)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0006afe00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2149 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc000813130)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008344e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008344e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0008344e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:47 +0x39
testing.tRunner(0xc0008344e0, 0x2e84710)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
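This goroutine and several later ones (TestCertOptions, TestCertExpiration, TestDockerFlags, TestForceSystemdEnv, TestPause, TestStartStop) are parked in testing.(*testContext).waitParallel: each called t.Parallel() and is queued behind the parallel limit while the four tests named in the panic header hold the slots. The limit is go test's -parallel flag, which defaults to GOMAXPROCS; a sketch, assuming the same go test invocation as above:

    go test ./test/integration -parallel 4 -timeout 3h

The 13-minute wait recorded in these stacks therefore measures slot starvation, not work inside the queued tests themselves.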

                                                
                                                
goroutine 871 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00166ede0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 976
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 82 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 41
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 742 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc000813130)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00087c000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00087c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc00087c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc00087c000, 0x2e84630)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 992 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3400280, 0xc000054420}, 0xc000975f50, 0xc000975f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3400280, 0xc000054420}, 0xa0?, 0xc000975f50, 0xc000975f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3400280?, 0xc000054420?}, 0x0?, 0xc000003e00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000975fd0?, 0x30e4a4?, 0xc00159c4e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 993 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 992
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 991 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000861710, 0x2f)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1e97c60?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00166ec60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000861740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001200990, {0x33dc640, 0xc001472000}, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001200990, 0x3b9aca00, 0x0, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1395 [chan send, 121 minutes]:
os/exec.(*Cmd).watchCtx(0xc001829500, 0xc001b62300)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 955
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                                
goroutine 743 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc000813130)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00087c1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00087c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc00087c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc00087c1a0, 0x2e84628)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 210 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00087e1e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 145
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 211 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00080c600, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 145
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 745 [syscall, 7 minutes, locked to thread]:
syscall.SyscallN(0x7ffc3f744e10?, {0xc000919a80?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6a0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00091d1d0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0001ff380)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0001ff380)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00087d380, 0xc0001ff380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc00087d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:91 +0x347
testing.tRunner(0xc00087d380, 0x2e84670)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 228 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 227
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 227 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3400280, 0xc000054420}, 0xc00128bf50, 0xc00128bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3400280, 0xc000054420}, 0x80?, 0xc00128bf50, 0xc00128bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3400280?, 0xc000054420?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x30e445?, 0xc0001f8300?, 0xc0006d4180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 211
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 226 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00080c410, 0x3b)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1e97c60?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00087e0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00080c600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000460500, {0x33dc640, 0xc0008bb260}, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000460500, 0x3b9aca00, 0x0, 0x1, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 211
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 744 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc000813130)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00087d1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00087d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc00087d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc00087d1e0, 0x2e84640)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1219 [chan send, 125 minutes]:
os/exec.(*Cmd).watchCtx(0xc001324d80, 0xc001438a20)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1218
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                                
goroutine 746 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc000813130)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00087d520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00087d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc00087d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc00087d520, 0x2e84668)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2291 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc0013abb20?, 0x18283b?, 0x10?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0xc0013abbb0?, 0xc0013abbf8?, 0x18283b?, 0xc0013abbf0?, 0x1e43d1?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x4a0, {0xc00161b24f?, 0x5b1, 0x2341df?}, 0xc0013abc28?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0015abb88?, {0xc00161b24f?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0015abb88, {0xc00161b24f, 0x5b1, 0x5b1})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0018ee268, {0xc00161b24f?, 0x20a44b?, 0x210?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001563680, {0x33db200, 0xc0018ee298})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x33db340, 0xc001563680}, {0x33db200, 0xc0018ee298}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0013abe98?, {0x33db340, 0xc001563680})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x17a19e?, {0x33db340?, 0xc001563680?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x33db340, 0xc001563680}, {0x33db2c0, 0xc0018ee268}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 745
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 813 [IO wait, 161 minutes]:
internal/poll.runtime_pollWait(0x26a6fd47ca8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000514408?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0004b6ca0, 0xc001375bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0004b6c88, 0x288, {0xc0006d85a0?, 0x0?, 0x0?}, 0xc000514008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0004b6c88, 0xc001375d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0004b6c88)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc00063c440)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00063c440)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0006e20f0, {0x33f3340, 0xc00063c440})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0006e20f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00087d6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 810
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 872 [chan receive, 127 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000861740, 0xc000054420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 976
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2151 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc000813130)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000834820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000834820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc000834820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc000834820, 0x2e84728)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2224 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffc3f744e10?, {0xc0012bf960?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6e4, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00091c7b0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0001fec00)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0001fec00)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0008351e0, 0xc0001fec00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0008351e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:130 +0x788
testing.tRunner(0xc0008351e0, 0x2e84738)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2222 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc000813130)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000834ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000834ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc000834ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc000834ea0, 0x2e84758)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2307 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x197ec5?, {0xc001359b20?, 0x23ffd18?, 0xc001359b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x18fdf6?, 0x4858940?, 0xc001359bf8?, 0x1829a5?, 0x26a4a690108?, 0x4d?, 0x178ba6?, 0x10?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5d8, {0xc0013cea0e?, 0x5f2, 0x0?}, 0xc001aa4580?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00161c508?, {0xc0013cea0e?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00161c508, {0xc0013cea0e, 0x5f2, 0x5f2})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000558238, {0xc0013cea0e?, 0xc001359d98?, 0x20e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015621e0, {0x33db200, 0xc000836720})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x33db340, 0xc0015621e0}, {0x33db200, 0xc000836720}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x33db340, 0xc0015621e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x180c56?, {0x33db340?, 0xc0015621e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x33db340, 0xc0015621e0}, {0x33db2c0, 0xc000558238}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001438240?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2226
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2226 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffc3f744e10?, {0xc0012c3798?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x300, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00091d080)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0001fef00)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0001fef00)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc000835520, 0xc0001fef00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc000835520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:243 +0xaff
testing.tRunner(0xc000835520, 0x2e846d8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2225 [chan receive, 7 minutes]:
testing.(*T).Run(0xc000835380, {0x23a8307?, 0x3005753e800?}, 0xc00194e040)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc000835380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2bc
testing.tRunner(0xc000835380, 0x2e84760)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2193 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001ff380, 0xc000054b40)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 745
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                                
goroutine 2192 [syscall, locked to thread]:
syscall.SyscallN(0x197ec5?, {0xc0012e1b20?, 0x233be08?, 0xc0012e1b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x18fdf6?, 0x4858940?, 0xc0012e1bf8?, 0x1829a5?, 0x0?, 0x10000?, 0x1?, 0xc001396308?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x608, {0xc0013a294b?, 0x56b5, 0x2341df?}, 0xc001305dc0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001590788?, {0xc0013a294b?, 0x10000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001590788, {0xc0013a294b, 0x56b5, 0x56b5})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0018ee288, {0xc0013a294b?, 0xc0012e1d98?, 0x7e3f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015636b0, {0x33db200, 0xc0008367c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x33db340, 0xc0015636b0}, {0x33db200, 0xc0008367c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x33db340, 0xc0015636b0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x180c56?, {0x33db340?, 0xc0015636b0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x33db340, 0xc0015636b0}, {0x33db2c0, 0xc0018ee288}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00188de60?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 745
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2327 [select]:
os/exec.(*Cmd).watchCtx(0xc0001ff200, 0xc0008624e0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2191
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

                                                
                                                
goroutine 2326 [syscall, locked to thread]:
syscall.SyscallN(0x197ec5?, {0xc00134db20?, 0x233be08?, 0xc00134db58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x18fdf6?, 0x4858940?, 0xc00134dbf8?, 0x1829a5?, 0x26a4a690a28?, 0x77?, 0x0?, 0x1f8c45?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6b0, {0xc0011dc210?, 0x1df0, 0x2341df?}, 0x3aef123?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001465408?, {0xc0011dc210?, 0x4000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001465408, {0xc0011dc210, 0x1df0, 0x1df0})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0018ee0e8, {0xc0011dc210?, 0x26a4a69da88?, 0x1e38?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0017f61b0, {0x33db200, 0xc0000a7638})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x33db340, 0xc0017f61b0}, {0x33db200, 0xc0000a7638}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00134de78?, {0x33db340, 0xc0017f61b0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00134df38?, {0x33db340?, 0xc0017f61b0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x33db340, 0xc0017f61b0}, {0x33db2c0, 0xc0018ee0e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0006d45a0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2191
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2191 [syscall, locked to thread]:
syscall.SyscallN(0x7ffc3f744e10?, {0xc001bff948?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x684, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00130b050)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0001ff200)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0001ff200)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0008349c0, 0xc0001ff200)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc0008349c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:198 +0x728
testing.tRunner(0xc0008349c0, 0xc00194e040)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2225
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2325 [syscall, locked to thread]:
syscall.SyscallN(0x197ec5?, {0xc0019cdb20?, 0x233be08?, 0xc0019cdb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x18fdf6?, 0x4858940?, 0xc0019cdbf8?, 0x1829a5?, 0x0?, 0x0?, 0x0?, 0xc0012aa000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5fc, {0xc0013ce26f?, 0x591, 0x2341df?}, 0x80?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001464f08?, {0xc0013ce26f?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001464f08, {0xc0013ce26f, 0x591, 0x591})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0018ee098, {0xc0013ce26f?, 0x26a6fd478c8?, 0x20c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0017f6180, {0x33db200, 0xc0006b2768})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x33db340, 0xc0017f6180}, {0x33db200, 0xc0006b2768}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x532c05?, {0x33db340, 0xc0017f6180})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0019cdeb8?, {0x33db340?, 0xc0017f6180?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x33db340, 0xc0017f6180}, {0x33db2c0, 0xc0018ee098}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001b08c60?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2191
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

                                                
                                                
goroutine 2308 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x197ec5?, {0xc000927b20?, 0x23ffd18?, 0xc000927b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x18fdf6?, 0x4858940?, 0xc000927bf8?, 0x1829a5?, 0x26a4a690108?, 0xc001aa5f77?, 0x10?, 0x10?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x64c, {0xc00083a239?, 0x1dc7, 0x2341df?}, 0xc001aa4df0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00161d408?, {0xc00083a239?, 0x4000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00161d408, {0xc00083a239, 0x1dc7, 0x1dc7})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000558298, {0xc00083a239?, 0x26a4a69da88?, 0x1e46?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001562210, {0x33db200, 0xc0000a74c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x33db340, 0xc001562210}, {0x33db200, 0xc0000a74c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x10?, {0x33db340, 0xc001562210})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000927eb8?, {0x33db340?, 0xc001562210?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x33db340, 0xc001562210}, {0x33db2c0, 0xc000558298}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000202fd0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2226
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2309 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001fef00, 0xc000862600)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2226
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 2310 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x197ec5?, {0xc0012f9b20?, 0x2333bf0?, 0xc0012f9b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x18fdf6?, 0x4858940?, 0xc0012f9bf8?, 0x1829a5?, 0x26a4a690eb8?, 0x485644d?, 0x26a4a690eb8?, 0xc000836958?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6c4, {0xc00161a26f?, 0x591, 0x2341df?}, 0xc0012f9c48?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00161d688?, {0xc00161a26f?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00161d688, {0xc00161a26f, 0x591, 0x591})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000558098, {0xc00161a26f?, 0x5?, 0x20c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001562090, {0x33db200, 0xc0008360b0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x33db340, 0xc001562090}, {0x33db200, 0xc0008360b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x33db340, 0xc001562090})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x2?, {0x33db340?, 0xc001562090?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x33db340, 0xc001562090}, {0x33db2c0, 0xc000558098}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x2e84738?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2224
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2311 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x197ec5?, {0xc0011bbb20?, 0x2333bf0?, 0xc0011bbb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x18fdf6?, 0x4858940?, 0xc0011bbbf8?, 0x1829a5?, 0x26a4a690eb8?, 0x1f8e77?, 0xc0011bbbc8?, 0x1f8c45?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6cc, {0xc00097e211?, 0x1def, 0x2341df?}, 0xc0011bbc50?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00161db88?, {0xc00097e211?, 0x4000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00161db88, {0xc00097e211, 0x1def, 0x1def})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000558110, {0xc00097e211?, 0x26a4a69da88?, 0x1e39?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0015620c0, {0x33db200, 0xc0000a60c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x33db340, 0xc0015620c0}, {0x33db200, 0xc0000a60c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0011bbe78?, {0x33db340, 0xc0015620c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0011bbf38?, {0x33db340?, 0xc0015620c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x33db340, 0xc0015620c0}, {0x33db2c0, 0xc000558110}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000862840?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2224
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 2312 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001fec00, 0xc000862180)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2224
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

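Editor's note on the dumps above: the repeated shape (io.Copy feeding a bytes.Buffer from syscall.ReadFile, created by os/exec.(*Cmd).Start) is the standard library's output-capture path. A minimal sketch of that pattern follows; the long-sleeping PowerShell child is illustrative, standing in for the hung Hyper-V commands:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var stdout, stderr bytes.Buffer
	// Long-sleeping child; stands in for the hung Hyper-V invocation.
	cmd := exec.Command("powershell", "-NoProfile", "-Command", "Start-Sleep -Seconds 300")
	cmd.Stdout = &stdout // non-*os.File writer: Start wires a pipe plus a copy goroutine
	cmd.Stderr = &stderr // second pipe, second copy goroutine
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	// Wait blocks until the child exits and both copiers drain; a stack
	// dump taken now would show goroutines parked in syscall.ReadFile,
	// matching the writerDescriptor.func1 frames above.
	fmt.Println("wait:", cmd.Wait())
}

Each captured stream costs one pipe and one copying goroutine, which is why a single wedged child process shows up as a cluster of ReadFile-blocked goroutines in the dump.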
Test pass (148/201)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 20.04
4 TestDownloadOnly/v1.20.0/preload-exists 0.07
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.44
9 TestDownloadOnly/v1.20.0/DeleteAll 1.18
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.29
12 TestDownloadOnly/v1.30.3/json-events 12.21
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.52
18 TestDownloadOnly/v1.30.3/DeleteAll 1.33
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 1.23
21 TestDownloadOnly/v1.31.0-beta.0/json-events 14.18
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 1.19
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 1.16
30 TestBinaryMirror 7.29
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.28
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.27
36 TestAddons/Setup 447.8
39 TestAddons/parallel/Ingress 67.06
40 TestAddons/parallel/InspektorGadget 26.2
41 TestAddons/parallel/MetricsServer 22.41
42 TestAddons/parallel/HelmTiller 30.14
44 TestAddons/parallel/CSI 76.25
45 TestAddons/parallel/Headlamp 35.32
46 TestAddons/parallel/CloudSpanner 22.74
47 TestAddons/parallel/LocalPath 99.59
48 TestAddons/parallel/NvidiaDevicePlugin 21.75
49 TestAddons/parallel/Yakd 5.02
50 TestAddons/parallel/Volcano 77.63
53 TestAddons/serial/GCPAuth/Namespaces 0.34
54 TestAddons/StoppedEnableDisable 54.13
66 TestErrorSpam/start 17.79
67 TestErrorSpam/status 38.58
68 TestErrorSpam/pause 23.69
69 TestErrorSpam/unpause 23.71
70 TestErrorSpam/stop 56.9
73 TestFunctional/serial/CopySyncFile 0.03
74 TestFunctional/serial/StartWithProxy 213.12
75 TestFunctional/serial/AuditLog 0
77 TestFunctional/serial/KubeContext 0.13
81 TestFunctional/serial/CacheCmd/cache/add_remote 348.73
82 TestFunctional/serial/CacheCmd/cache/add_local 60.81
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.27
84 TestFunctional/serial/CacheCmd/cache/list 0.26
87 TestFunctional/serial/CacheCmd/cache/delete 0.53
90 TestFunctional/serial/ExtraConfig 167.28
91 TestFunctional/serial/ComponentHealth 0.22
92 TestFunctional/serial/LogsCmd 8.79
93 TestFunctional/serial/LogsFileCmd 11.05
94 TestFunctional/serial/InvalidService 21.51
100 TestFunctional/parallel/StatusCmd 45.53
104 TestFunctional/parallel/ServiceCmdConnect 33.08
105 TestFunctional/parallel/AddonsCmd 0.82
106 TestFunctional/parallel/PersistentVolumeClaim 42.58
108 TestFunctional/parallel/SSHCmd 24.67
109 TestFunctional/parallel/CpCmd 60.56
110 TestFunctional/parallel/MySQL 64.97
111 TestFunctional/parallel/FileSync 12.3
112 TestFunctional/parallel/CertSync 70.31
116 TestFunctional/parallel/NodeLabels 0.23
118 TestFunctional/parallel/NonActiveRuntimeDisabled 11.64
120 TestFunctional/parallel/License 3.02
121 TestFunctional/parallel/ServiceCmd/DeployApp 16.42
122 TestFunctional/parallel/ServiceCmd/List 14.09
123 TestFunctional/parallel/ServiceCmd/JSONOutput 13.98
124 TestFunctional/parallel/Version/short 0.27
125 TestFunctional/parallel/Version/components 8.63
126 TestFunctional/parallel/ImageCommands/ImageListShort 7.9
127 TestFunctional/parallel/ImageCommands/ImageListTable 7.94
128 TestFunctional/parallel/ImageCommands/ImageListJson 7.73
129 TestFunctional/parallel/ImageCommands/ImageListYaml 7.83
130 TestFunctional/parallel/ImageCommands/ImageBuild 28.36
131 TestFunctional/parallel/ImageCommands/Setup 2.51
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 20.07
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 10.29
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 19.66
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.75
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 20.69
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/DockerEnv/powershell 50.75
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.65
151 TestFunctional/parallel/ImageCommands/ImageRemove 17.63
152 TestFunctional/parallel/UpdateContextCmd/no_changes 2.69
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.67
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.56
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 18.28
156 TestFunctional/parallel/ProfileCmd/profile_not_create 12.32
157 TestFunctional/parallel/ProfileCmd/profile_list 12.91
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 10.34
159 TestFunctional/parallel/ProfileCmd/profile_json_output 11.9
160 TestFunctional/delete_echo-server_images 0.02
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 750.89
167 TestMultiControlPlane/serial/DeployApp 12.91
169 TestMultiControlPlane/serial/AddWorkerNode 275.41
170 TestMultiControlPlane/serial/NodeLabels 0.22
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 29.81
172 TestMultiControlPlane/serial/CopyFile 657.51
176 TestImageBuild/serial/Setup 202.16
177 TestImageBuild/serial/NormalBuild 10.17
178 TestImageBuild/serial/BuildWithBuildArg 9.31
179 TestImageBuild/serial/BuildWithDockerIgnore 7.97
180 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.78
184 TestJSONOutput/start/Command 212.8
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 7.79
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 7.72
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 34.04
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 1.43
212 TestMainNoArgs 0.23
213 TestMinikubeProfile 529.38
216 TestMountStart/serial/StartWithMountFirst 154.08
217 TestMountStart/serial/VerifyMountFirst 9.81
218 TestMountStart/serial/StartWithMountSecond 159.49
219 TestMountStart/serial/VerifyMountSecond 9.78
220 TestMountStart/serial/DeleteFirst 31.41
221 TestMountStart/serial/VerifyMountPostDelete 9.76
222 TestMountStart/serial/Stop 27.03
223 TestMountStart/serial/RestartStopped 119.07
224 TestMountStart/serial/VerifyMountPostStop 9.48
227 TestMultiNode/serial/FreshStart2Nodes 455.95
228 TestMultiNode/serial/DeployApp2Nodes 9.65
230 TestMultiNode/serial/AddNode 257.48
231 TestMultiNode/serial/MultiNodeLabels 0.2
232 TestMultiNode/serial/ProfileList 12.83
233 TestMultiNode/serial/CopyFile 379.09
234 TestMultiNode/serial/StopNode 80.06
235 TestMultiNode/serial/StartAfterStop 197.49
242 TestPreload 570.33
243 TestScheduledStopWindows 338.46
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.37

TestDownloadOnly/v1.20.0/json-events (20.04s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-823800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-823800 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (20.0420836s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (20.04s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.44s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-823800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-823800: exit status 85 (433.1745ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-823800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |          |
	|         | -p download-only-823800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:25:37
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:25:37.979164    6392 out.go:291] Setting OutFile to fd 612 ...
	I0721 23:25:37.979831    6392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:25:37.979831    6392 out.go:304] Setting ErrFile to fd 616...
	I0721 23:25:37.979831    6392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0721 23:25:37.990824    6392 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0721 23:25:37.996514    6392 out.go:298] Setting JSON to true
	I0721 23:25:38.010468    6392 start.go:129] hostinfo: {"hostname":"minikube6","uptime":119545,"bootTime":1721484792,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0721 23:25:38.010468    6392 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 23:25:38.018619    6392 out.go:97] [download-only-823800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0721 23:25:38.018619    6392 notify.go:220] Checking for updates...
	W0721 23:25:38.018619    6392 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0721 23:25:38.020685    6392 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0721 23:25:38.025674    6392 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0721 23:25:38.028419    6392 out.go:169] MINIKUBE_LOCATION=19312
	I0721 23:25:38.031491    6392 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0721 23:25:38.036363    6392 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 23:25:38.042180    6392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:25:43.446253    6392 out.go:97] Using the hyperv driver based on user configuration
	I0721 23:25:43.446253    6392 start.go:297] selected driver: hyperv
	I0721 23:25:43.446253    6392 start.go:901] validating driver "hyperv" against <nil>
	I0721 23:25:43.446253    6392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 23:25:43.499300    6392 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0721 23:25:43.500835    6392 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 23:25:43.500835    6392 cni.go:84] Creating CNI manager for ""
	I0721 23:25:43.500835    6392 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0721 23:25:43.500835    6392 start.go:340] cluster config:
	{Name:download-only-823800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-823800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:25:43.502302    6392 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:25:43.506154    6392 out.go:97] Downloading VM boot image ...
	I0721 23:25:43.506154    6392 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1721324531-19298-amd64.iso
	I0721 23:25:48.363349    6392 out.go:97] Starting "download-only-823800" primary control-plane node in "download-only-823800" cluster
	I0721 23:25:48.363349    6392 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 23:25:48.421303    6392 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0721 23:25:48.421303    6392 cache.go:56] Caching tarball of preloaded images
	I0721 23:25:48.422073    6392 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 23:25:48.426659    6392 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0721 23:25:48.426659    6392 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0721 23:25:48.540202    6392 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0721 23:25:52.981362    6392 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-823800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-823800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:25:58.013439    7632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.44s)
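One aside on the recurring stderr warning: the Docker CLI keys context metadata by the SHA-256 of the context name, so the long hex directory under .docker\contexts\meta is simply the digest of "default". A quick illustrative check (not part of the test code):

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Prints 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f,
	// the directory name in the meta.json path from the warning above.
	fmt.Printf("%x\n", sha256.Sum256([]byte("default")))
}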

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.18s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1753863s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.18s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.29s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-823800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-823800: (1.2940069s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.29s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (12.21s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-451800 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-451800 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperv: (12.2127638s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.21s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.52s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-451800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-451800: exit status 85 (514.9721ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-823800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | -p download-only-823800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-823800        | download-only-823800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:26 UTC |
	| start   | -o=json --download-only        | download-only-451800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC |                     |
	|         | -p download-only-451800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:26:00
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:26:00.993500    6796 out.go:291] Setting OutFile to fd 712 ...
	I0721 23:26:00.994120    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:26:00.994120    6796 out.go:304] Setting ErrFile to fd 708...
	I0721 23:26:00.994120    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:26:01.019719    6796 out.go:298] Setting JSON to true
	I0721 23:26:01.023069    6796 start.go:129] hostinfo: {"hostname":"minikube6","uptime":119568,"bootTime":1721484792,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0721 23:26:01.023069    6796 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 23:26:01.091073    6796 out.go:97] [download-only-451800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0721 23:26:01.091686    6796 notify.go:220] Checking for updates...
	I0721 23:26:01.094583    6796 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0721 23:26:01.097776    6796 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0721 23:26:01.101212    6796 out.go:169] MINIKUBE_LOCATION=19312
	I0721 23:26:01.104040    6796 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0721 23:26:01.108824    6796 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 23:26:01.109518    6796 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:26:06.734758    6796 out.go:97] Using the hyperv driver based on user configuration
	I0721 23:26:06.735894    6796 start.go:297] selected driver: hyperv
	I0721 23:26:06.735894    6796 start.go:901] validating driver "hyperv" against <nil>
	I0721 23:26:06.735894    6796 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 23:26:06.790530    6796 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0721 23:26:06.791881    6796 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 23:26:06.792070    6796 cni.go:84] Creating CNI manager for ""
	I0721 23:26:06.792070    6796 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 23:26:06.792070    6796 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 23:26:06.792070    6796 start.go:340] cluster config:
	{Name:download-only-451800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-451800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:26:06.792070    6796 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:26:06.795586    6796 out.go:97] Starting "download-only-451800" primary control-plane node in "download-only-451800" cluster
	I0721 23:26:06.796104    6796 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 23:26:06.856737    6796 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 23:26:06.857203    6796 cache.go:56] Caching tarball of preloaded images
	I0721 23:26:06.857416    6796 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 23:26:06.863207    6796 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0721 23:26:06.863318    6796 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0721 23:26:06.978140    6796 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 23:26:10.962770    6796 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0721 23:26:10.963866    6796 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-451800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-451800"

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:26:13.152120    2560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.52s)
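The preload lines above (getting, saving, then verifying a checksum) describe the md5 query parameter appended to the download URL. A hedged sketch of that verification step; the file path is illustrative, and the expected digest is copied from the URL in the log:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 recomputes a file's MD5 and compares it with the expected
// hex digest, mirroring the "verifying checksum" step in the log.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Digest copied from the ?checksum=md5:... URL above; path is illustrative.
	fmt.Println(verifyMD5("preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4",
		"6304692df2fe6f7b0bdd7f93d160be8c"))
}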

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (1.33s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3311077s)
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (1.33s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (1.23s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-451800
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-451800: (1.2319427s)
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (1.23s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (14.18s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-258200 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-258200 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperv: (14.1763255s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (14.18s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-258200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-258200: exit status 85 (287.2226ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-823800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:25 UTC |                     |
	|         | -p download-only-823800             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:25 UTC |
	| delete  | -p download-only-823800             | download-only-823800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:25 UTC | 21 Jul 24 23:26 UTC |
	| start   | -o=json --download-only             | download-only-451800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC |                     |
	|         | -p download-only-451800             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC | 21 Jul 24 23:26 UTC |
	| delete  | -p download-only-451800             | download-only-451800 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC | 21 Jul 24 23:26 UTC |
	| start   | -o=json --download-only             | download-only-258200 | minikube6\jenkins | v1.33.1 | 21 Jul 24 23:26 UTC |                     |
	|         | -p download-only-258200             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 23:26:16
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 23:26:16.302533    9232 out.go:291] Setting OutFile to fd 704 ...
	I0721 23:26:16.303202    9232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:26:16.303202    9232 out.go:304] Setting ErrFile to fd 716...
	I0721 23:26:16.303463    9232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 23:26:16.327900    9232 out.go:298] Setting JSON to true
	I0721 23:26:16.331232    9232 start.go:129] hostinfo: {"hostname":"minikube6","uptime":119583,"bootTime":1721484792,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0721 23:26:16.331232    9232 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 23:26:16.339169    9232 out.go:97] [download-only-258200] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0721 23:26:16.340102    9232 notify.go:220] Checking for updates...
	I0721 23:26:16.342112    9232 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0721 23:26:16.345117    9232 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0721 23:26:16.348434    9232 out.go:169] MINIKUBE_LOCATION=19312
	I0721 23:26:16.351122    9232 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0721 23:26:16.356107    9232 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 23:26:16.356107    9232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 23:26:21.939909    9232 out.go:97] Using the hyperv driver based on user configuration
	I0721 23:26:21.940141    9232 start.go:297] selected driver: hyperv
	I0721 23:26:21.940302    9232 start.go:901] validating driver "hyperv" against <nil>
	I0721 23:26:21.940402    9232 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 23:26:21.987413    9232 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0721 23:26:21.988126    9232 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 23:26:21.988126    9232 cni.go:84] Creating CNI manager for ""
	I0721 23:26:21.988126    9232 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 23:26:21.988126    9232 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 23:26:21.988852    9232 start.go:340] cluster config:
	{Name:download-only-258200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-258200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 23:26:21.989216    9232 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 23:26:21.993512    9232 out.go:97] Starting "download-only-258200" primary control-plane node in "download-only-258200" cluster
	I0721 23:26:21.993512    9232 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0721 23:26:22.049506    9232 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0721 23:26:22.049506    9232 cache.go:56] Caching tarball of preloaded images
	I0721 23:26:22.050201    9232 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0721 23:26:22.053428    9232 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0721 23:26:22.053590    9232 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0721 23:26:22.172880    9232 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0721 23:26:26.038568    9232 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0721 23:26:26.039941    9232 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-258200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-258200"

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:26:30.396058    7900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (1.19s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1910707s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (1.19s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.16s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-258200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-258200: (1.1604299s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.16s)

                                                
                                    
TestBinaryMirror (7.29s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-264300 --alsologtostderr --binary-mirror http://127.0.0.1:51198 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-264300 --alsologtostderr --binary-mirror http://127.0.0.1:51198 --driver=hyperv: (6.3958003s)
helpers_test.go:175: Cleaning up "binary-mirror-264300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-264300
--- PASS: TestBinaryMirror (7.29s)
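TestBinaryMirror starts minikube against a local HTTP endpoint (http://127.0.0.1:51198 above) from which the kubectl, kubelet, and kubeadm binaries are fetched. A throwaway mirror of that shape might look like the sketch below; the served directory is an assumption, not the test's actual layout:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory over HTTP; minikube is then started with
	// --binary-mirror http://127.0.0.1:51198 as in the log above.
	// "./mirror-root" is an assumed layout, not the test's own server.
	log.Fatal(http.ListenAndServe("127.0.0.1:51198",
		http.FileServer(http.Dir("./mirror-root"))))
}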

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-979300
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-979300: exit status 85 (277.6866ms)

                                                
                                                
-- stdout --
	* Profile "addons-979300" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-979300"

                                                
                                                
-- /stdout --
** stderr ** 
	W0721 23:26:43.860745     184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-979300
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-979300: exit status 85 (268.055ms)

                                                
                                                
-- stdout --
	* Profile "addons-979300" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-979300"

-- /stdout --
** stderr ** 
	W0721 23:26:43.862910    7704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

TestAddons/Setup (447.8s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-979300 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-979300 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m27.7972444s)
--- PASS: TestAddons/Setup (447.80s)

TestAddons/parallel/Ingress (67.06s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-979300 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-979300 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-979300 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6ba3506d-524e-4e7c-815c-c06f08ac57e5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6ba3506d-524e-4e7c-815c-c06f08ac57e5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0201508s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.0583662s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-979300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0721 23:36:09.891139    6260 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-979300 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 ip: (2.5815897s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.28.202.6
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 addons disable ingress-dns --alsologtostderr -v=1: (16.8122105s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 addons disable ingress --alsologtostderr -v=1: (22.4985791s)
--- PASS: TestAddons/parallel/Ingress (67.06s)
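The decisive ingress check above is a single HTTP probe run inside the VM: `minikube ssh` executes curl against 127.0.0.1 with a Host header matching the Ingress rule, so the request exercises the nginx controller rather than the Windows host network. A rough equivalent of that probe, sketched with os/exec (command strings copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Inside the guest, 127.0.0.1 is the node itself; the Host header
	// selects the nginx.example.com rule on the ingress controller.
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "addons-979300",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'").CombinedOutput()
	if err != nil {
		fmt.Println("ingress probe failed:", err)
		return
	}
	fmt.Printf("%s", out)
}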

TestAddons/parallel/InspektorGadget (26.2s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cww5g" [e8f4e155-445a-40f9-bdad-0103ec53f3ad] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0111049s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-979300
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-979300: (21.1859223s)
--- PASS: TestAddons/parallel/InspektorGadget (26.20s)

TestAddons/parallel/MetricsServer (22.41s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 10.6429ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-t9m2z" [ae86f906-9466-41f1-b58a-c948643b8eba] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0155566s
addons_test.go:417: (dbg) Run:  kubectl --context addons-979300 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 addons disable metrics-server --alsologtostderr -v=1: (16.174635s)
--- PASS: TestAddons/parallel/MetricsServer (22.41s)

TestAddons/parallel/HelmTiller (30.14s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 9.3422ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-4l99d" [a21ef14e-8f4b-41cc-8386-13413d02987f] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0157009s
addons_test.go:475: (dbg) Run:  kubectl --context addons-979300 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-979300 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.2342635s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 addons disable helm-tiller --alsologtostderr -v=1: (15.8551989s)
--- PASS: TestAddons/parallel/HelmTiller (30.14s)

TestAddons/parallel/CSI (76.25s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 14.2082ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-979300 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-979300 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8df731c9-ab93-48f5-a9f3-bd5739ba65e2] Pending
helpers_test.go:344: "task-pv-pod" [8df731c9-ab93-48f5-a9f3-bd5739ba65e2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8df731c9-ab93-48f5-a9f3-bd5739ba65e2] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.0143109s
addons_test.go:586: (dbg) Run:  kubectl --context addons-979300 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-979300 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-979300 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-979300 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-979300 delete pod task-pv-pod: (1.0167828s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-979300 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-979300 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-979300 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [697865f4-4f27-41b9-a452-b52470c783b9] Pending
helpers_test.go:344: "task-pv-pod-restore" [697865f4-4f27-41b9-a452-b52470c783b9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [697865f4-4f27-41b9-a452-b52470c783b9] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0124485s
addons_test.go:628: (dbg) Run:  kubectl --context addons-979300 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-979300 delete pod task-pv-pod-restore: (1.7435376s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-979300 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-979300 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 addons disable csi-hostpath-driver --alsologtostderr -v=1: (22.3033314s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 addons disable volumesnapshots --alsologtostderr -v=1: (15.868107s)
--- PASS: TestAddons/parallel/CSI (76.25s)
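The runs of identical `get pvc ... -o jsonpath={.status.phase}` lines above are a poll: the helper re-reads the claim's phase until it reports Bound or the 6m0s budget runs out. A minimal sketch of that loop, assuming kubectl on PATH and a 2-second retry interval (the helper's real cadence may differ):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// jsonpath extracts just the phase string, e.g. "Pending" or "Bound".
		out, _ := exec.Command("kubectl", "--context", "addons-979300",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second) // assumed interval, not the helper's real one
	}
	fmt.Println("timed out waiting for pvc hpvc")
}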

TestAddons/parallel/Headlamp (35.32s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-979300 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-979300 --alsologtostderr -v=1: (16.2721124s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-962bd" [c173715f-b345-420e-a8c9-4a58fc29a084] Pending
helpers_test.go:344: "headlamp-7867546754-962bd" [c173715f-b345-420e-a8c9-4a58fc29a084] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-962bd" [c173715f-b345-420e-a8c9-4a58fc29a084] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.0365777s
--- PASS: TestAddons/parallel/Headlamp (35.32s)

TestAddons/parallel/CloudSpanner (22.74s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-6qd82" [100c04c1-c805-47b0-ba3c-78f297801304] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0129272s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-979300
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-979300: (16.7078125s)
--- PASS: TestAddons/parallel/CloudSpanner (22.74s)

TestAddons/parallel/LocalPath (99.59s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-979300 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-979300 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [14aca3e9-3e3e-4b20-915f-60844f7d390c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [14aca3e9-3e3e-4b20-915f-60844f7d390c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [14aca3e9-3e3e-4b20-915f-60844f7d390c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0192974s
addons_test.go:992: (dbg) Run:  kubectl --context addons-979300 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 ssh "cat /opt/local-path-provisioner/pvc-271e3385-5895-4e4b-bd9d-59b933322d79_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 ssh "cat /opt/local-path-provisioner/pvc-271e3385-5895-4e4b-bd9d-59b933322d79_default_test-pvc/file1": (11.3158426s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-979300 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-979300 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m1.9023494s)
--- PASS: TestAddons/parallel/LocalPath (99.59s)
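The `ssh "cat /opt/local-path-provisioner/..."` step is how the test proves the provisioner wrote through to the node's filesystem: it reads the file the busybox pod created, straight from the host path. Sketched below; note the pvc-… segment embeds the claim's UID, so it changes on every run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Read the provisioned file from inside the guest over minikube ssh.
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "addons-979300",
		"ssh", "cat /opt/local-path-provisioner/pvc-271e3385-5895-4e4b-bd9d-59b933322d79_default_test-pvc/file1").CombinedOutput()
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("file1 contents: %s", out)
}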

TestAddons/parallel/NvidiaDevicePlugin (21.75s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gllqd" [baa3276f-8d84-4ae7-a9fb-d8ba754b2fc4] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0201002s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-979300
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-979300: (15.7160877s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.75s)

TestAddons/parallel/Yakd (5.02s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-8cf9k" [4431f9e5-d7e8-47e9-bf8d-315cda1774a0] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0178634s
--- PASS: TestAddons/parallel/Yakd (5.02s)

TestAddons/parallel/Volcano (77.63s)
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 30.5271ms
addons_test.go:897: volcano-admission stabilized in 30.9786ms
addons_test.go:889: volcano-scheduler stabilized in 31.5133ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-mqbc8" [5614b6b1-9d08-40b1-be81-331c4f0eaaf0] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 6.018465s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-zjmsj" [0286a832-55b3-4c79-94aa-3c6f4cae1629] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 6.014554s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-h65gp" [b5da68d0-1160-431f-85a0-1e793eed4f12] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0229913s
addons_test.go:924: (dbg) Run:  kubectl --context addons-979300 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-979300 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-979300 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [619369ff-78db-4515-98a8-6769bee65198] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [619369ff-78db-4515-98a8-6769bee65198] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 32.03051s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-979300 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-979300 addons disable volcano --alsologtostderr -v=1: (27.5568609s)
--- PASS: TestAddons/parallel/Volcano (77.63s)

TestAddons/serial/GCPAuth/Namespaces (0.34s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-979300 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-979300 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)

TestAddons/StoppedEnableDisable (54.13s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-979300
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-979300: (41.454903s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-979300
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-979300: (5.1824697s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-979300
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-979300: (4.9009777s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-979300
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-979300: (2.5900705s)
--- PASS: TestAddons/StoppedEnableDisable (54.13s)

TestErrorSpam/start (17.79s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 start --dry-run: (5.9253808s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 start --dry-run: (5.9704292s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 start --dry-run: (5.8912521s)
--- PASS: TestErrorSpam/start (17.79s)
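Judging by the name and the repeated identical invocations, the error-spam checks run a routine subcommand several times and fail if its output picks up unexpected warning or error lines. The gist, as a loose sketch (this is an assumption about the mechanism, not the real error_spam_test.go logic):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run a routine subcommand and flag warning/error-looking output lines;
	// a quiet run should produce none of them.
	out, _ := exec.Command("out/minikube-windows-amd64.exe", "-p", "nospam-420400",
		"start", "--dry-run").CombinedOutput()
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "WARNING") || strings.Contains(line, "ERROR") {
			fmt.Println("unexpected spam:", line)
		}
	}
}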

TestErrorSpam/status (38.58s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 status: (13.020607s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 status: (12.5971622s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 status: (12.9565706s)
--- PASS: TestErrorSpam/status (38.58s)

TestErrorSpam/pause (23.69s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 pause: (8.1060285s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 pause: (7.8429703s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 pause: (7.7418617s)
--- PASS: TestErrorSpam/pause (23.69s)

TestErrorSpam/unpause (23.71s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 unpause: (7.9208484s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 unpause
E0721 23:44:11.859695    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:44:11.874880    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:44:11.890462    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:44:11.922300    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:44:11.969616    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:44:12.063826    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:44:12.238853    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:44:12.574054    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:44:13.223054    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:44:14.511999    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:44:17.072512    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 unpause: (7.9479737s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 unpause
E0721 23:44:22.194999    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 unpause: (7.8337423s)
--- PASS: TestErrorSpam/unpause (23.71s)

TestErrorSpam/stop (56.9s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 stop
E0721 23:44:32.442609    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:44:52.926953    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 stop: (34.3573606s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 stop: (11.5259113s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-420400 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-420400 stop: (11.0128582s)
--- PASS: TestErrorSpam/stop (56.90s)

TestFunctional/serial/CopySyncFile (0.03s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5100\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (213.12s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-264400 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0721 23:46:55.822643    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0721 23:49:11.850290    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-264400 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m33.1082448s)
--- PASS: TestFunctional/serial/StartWithProxy (213.12s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.13s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/CacheCmd/cache/add_remote (348.73s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 cache add registry.k8s.io/pause:3.1
E0721 23:59:11.869232    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 cache add registry.k8s.io/pause:3.1: (1m47.7378544s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 cache add registry.k8s.io/pause:3.3
E0722 00:00:35.036446    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 cache add registry.k8s.io/pause:3.3: (2m0.5016961s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 cache add registry.k8s.io/pause:latest: (2m0.4866116s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (348.73s)

TestFunctional/serial/CacheCmd/cache/add_local (60.81s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-264400 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2025578561\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-264400 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2025578561\001: (2.3237902s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 cache add minikube-local-cache-test:functional-264400
E0722 00:04:11.866283    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 cache add minikube-local-cache-test:functional-264400: (58.002442s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 cache delete minikube-local-cache-test:functional-264400
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-264400
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (60.81s)
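add_local exercises the full round trip for a locally built image: build a throwaway tag with docker, load it into the cluster's cache via `minikube cache add`, then drop both the cache entry and the host-side image. The same four commands, sketched in Go (the generated temp build-context path from the log is replaced with "." here):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one command and reports any failure with its combined output.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		fmt.Printf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	img := "minikube-local-cache-test:functional-264400"
	run("docker", "build", "-t", img, ".") // the test builds from a temp dir
	run("out/minikube-windows-amd64.exe", "-p", "functional-264400", "cache", "add", img)
	run("out/minikube-windows-amd64.exe", "-p", "functional-264400", "cache", "delete", img)
	run("docker", "rmi", img)
}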

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

TestFunctional/serial/CacheCmd/cache/list (0.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

TestFunctional/serial/CacheCmd/cache/delete (0.53s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.53s)

TestFunctional/serial/ExtraConfig (167.28s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-264400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0722 00:17:15.057903    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-264400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m47.280187s)
functional_test.go:757: restart took 2m47.2811062s for "functional-264400" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (167.28s)

TestFunctional/serial/ComponentHealth (0.22s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-264400 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.22s)
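ComponentHealth fetches every tier=control-plane pod as JSON and derives the phase/status pairs above from each pod's .status.phase and its Ready condition. A compact standalone sketch of the same check (struct fields follow the core/v1 Pod schema; Go's JSON decoder matches the lowercase keys case-insensitively):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-264400",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}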

TestFunctional/serial/LogsCmd (8.79s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 logs: (8.7933522s)
--- PASS: TestFunctional/serial/LogsCmd (8.79s)

TestFunctional/serial/LogsFileCmd (11.05s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4101950548\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4101950548\001\logs.txt: (11.0469874s)
--- PASS: TestFunctional/serial/LogsFileCmd (11.05s)

TestFunctional/serial/InvalidService (21.51s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-264400 apply -f testdata\invalidsvc.yaml
E0722 00:19:11.883332    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-264400
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-264400: exit status 115 (17.0910413s)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.28.193.97:30247 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	W0722 00:19:14.399164    5840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-264400 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (21.51s)

TestFunctional/parallel/StatusCmd (45.53s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 status: (15.0616394s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (16.1311936s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 status -o json: (14.3401832s)
--- PASS: TestFunctional/parallel/StatusCmd (45.53s)

TestFunctional/parallel/ServiceCmdConnect (33.08s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-264400 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-264400 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-k5t2n" [57821c34-239e-4b32-a991-fdfe17e53bce] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-k5t2n" [57821c34-239e-4b32-a991-fdfe17e53bce] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.0237386s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 service hello-node-connect --url: (19.6259293s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.28.193.97:31620
functional_test.go:1671: http://172.28.193.97:31620: success! body:

Hostname: hello-node-connect-57b4589c47-k5t2n

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.28.193.97:8080/

Request Headers:
	accept-encoding=gzip
	host=172.28.193.97:31620
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (33.08s)
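
The pass above hinges on two steps: `service ... --url` resolves a NodePort URL for the deployment, and a plain HTTP GET against it must return the echoserver's reflected request. A minimal sketch of that connectivity check, using the URL reported in this run (any reachable service URL would do):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// NodePort URL as reported in this run.
	url := "http://172.28.193.97:31620"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("connect failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The echoserver reflects request details back, so a non-empty 200
	// body is enough to call the endpoint healthy.
	fmt.Printf("status=%d bytes=%d\n", resp.StatusCode, len(body))
}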

TestFunctional/parallel/AddonsCmd (0.82s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.82s)

TestFunctional/parallel/PersistentVolumeClaim (42.58s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [74960e69-07c9-4691-813e-241a68c609e7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.015999s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-264400 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-264400 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-264400 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-264400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2b4d7bc8-9893-4a79-8722-15308c09fa22] Pending
helpers_test.go:344: "sp-pod" [2b4d7bc8-9893-4a79-8722-15308c09fa22] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2b4d7bc8-9893-4a79-8722-15308c09fa22] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.0106709s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-264400 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-264400 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-264400 delete -f testdata/storage-provisioner/pod.yaml: (1.5019158s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-264400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b2575545-a568-4c9b-b96a-eec9f4befd17] Pending
helpers_test.go:344: "sp-pod" [b2575545-a568-4c9b-b96a-eec9f4befd17] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b2575545-a568-4c9b-b96a-eec9f4befd17] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0116142s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-264400 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.58s)
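
The sequence above is a persistence check: a file is written through the mounted claim, the pod is deleted and recreated, and the file must still exist because it lives on the PersistentVolume rather than in the pod. A sketch of the same kubectl sequence, shelling out the way the test does (context name and manifest paths mirror this run; the real test also waits for the new pod to become Ready before the final exec):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and echoes the combined output.
func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s", args, out)
	return err
}

func main() {
	ctx := "--context=functional-264400"
	// Write through the mounted claim, recycle the pod, then confirm the
	// file survived the pod's deletion.
	_ = run(ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_ = run(ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = run(ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount")
}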

TestFunctional/parallel/SSHCmd (24.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh "echo hello": (11.6987515s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh "cat /etc/hostname": (12.9699364s)
--- PASS: TestFunctional/parallel/SSHCmd (24.67s)

TestFunctional/parallel/CpCmd (60.56s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.0158105s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh -n functional-264400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh -n functional-264400 "sudo cat /home/docker/cp-test.txt": (10.4351418s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 cp functional-264400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd4062090943\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 cp functional-264400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd4062090943\001\cp-test.txt: (10.7648646s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh -n functional-264400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh -n functional-264400 "sudo cat /home/docker/cp-test.txt": (10.7452739s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (9.0226676s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh -n functional-264400 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh -n functional-264400 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.5706137s)
--- PASS: TestFunctional/parallel/CpCmd (60.56s)

TestFunctional/parallel/MySQL (64.97s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-264400 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-vkbf7" [0deeb8ed-8002-44da-b98d-8d9a471ca371] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-vkbf7" [0deeb8ed-8002-44da-b98d-8d9a471ca371] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 50.0078475s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-264400 exec mysql-64454c8b5c-vkbf7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-264400 exec mysql-64454c8b5c-vkbf7 -- mysql -ppassword -e "show databases;": exit status 1 (323.683ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-264400 exec mysql-64454c8b5c-vkbf7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-264400 exec mysql-64454c8b5c-vkbf7 -- mysql -ppassword -e "show databases;": exit status 1 (278.4045ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-264400 exec mysql-64454c8b5c-vkbf7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-264400 exec mysql-64454c8b5c-vkbf7 -- mysql -ppassword -e "show databases;": exit status 1 (316.7638ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-264400 exec mysql-64454c8b5c-vkbf7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-264400 exec mysql-64454c8b5c-vkbf7 -- mysql -ppassword -e "show databases;": exit status 1 (325.0639ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-264400 exec mysql-64454c8b5c-vkbf7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-264400 exec mysql-64454c8b5c-vkbf7 -- mysql -ppassword -e "show databases;": exit status 1 (267.9922ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-264400 exec mysql-64454c8b5c-vkbf7 -- mysql -ppassword -e "show databases;"
E0722 00:24:11.889344    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (64.97s)
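
The repeated non-zero exits above are expected: the test polls `show databases;` while mysqld is still initializing (ERROR 2002) and before the root account is provisioned (ERROR 1045), and passes once a run succeeds. A sketch of that retry pattern; the attempt count and delay are illustrative, since the test's actual backoff is not visible in the log:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry re-runs fn until it succeeds or attempts are exhausted,
// returning the last error.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(10, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			// Simulates the transient failures seen in the log.
			return errors.New("ERROR 2002 (HY000): server not ready")
		}
		return nil // the query finally succeeds
	})
	fmt.Println("attempts:", calls, "err:", err)
}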

TestFunctional/parallel/FileSync (12.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/5100/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /etc/test/nested/copy/5100/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /etc/test/nested/copy/5100/hosts": (12.298513s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (12.30s)

TestFunctional/parallel/CertSync (70.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/5100.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /etc/ssl/certs/5100.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /etc/ssl/certs/5100.pem": (11.4020199s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/5100.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /usr/share/ca-certificates/5100.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /usr/share/ca-certificates/5100.pem": (12.3915335s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.7753371s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/51002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /etc/ssl/certs/51002.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /etc/ssl/certs/51002.pem": (11.734127s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/51002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /usr/share/ca-certificates/51002.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /usr/share/ca-certificates/51002.pem": (11.4677884s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (11.5361211s)
--- PASS: TestFunctional/parallel/CertSync (70.31s)

TestFunctional/parallel/NodeLabels (0.23s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-264400 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.23s)
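
kubectl's --template flag takes a Go template, so the command above prints every label key of the first node. A minimal reproduction of that template over sample labels (the label set here is illustrative):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Sample node labels; a real node carries many more.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-264400",
		"kubernetes.io/os":       "linux",
	}
	// Same range construct the test passes to kubectl.
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}"))
	_ = tmpl.Execute(os.Stdout, labels)
}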

TestFunctional/parallel/NonActiveRuntimeDisabled (11.64s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 ssh "sudo systemctl is-active crio": exit status 1 (11.6419164s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0722 00:20:10.921529    5164 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (11.64s)
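
This test passes because the command fails: `systemctl is-active` exits non-zero (here status 3, "inactive") when the unit is not running, which confirms the non-selected runtime is disabled on a docker-runtime cluster. A sketch of that exit-code assertion; the test runs it over `minikube ssh`, while this runs locally for illustration:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active` exits 0 only when the unit is active.
	out, err := exec.Command("systemctl", "is-active", "crio").CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		fmt.Printf("crio is %s(exit %d), as the test expects\n", out, exitErr.ExitCode())
	case err != nil:
		fmt.Println("could not run systemctl:", err)
	default:
		fmt.Println("crio is active; the assertion would fail")
	}
}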

TestFunctional/parallel/License (3.02s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.0036313s)
--- PASS: TestFunctional/parallel/License (3.02s)

TestFunctional/parallel/ServiceCmd/DeployApp (16.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-264400 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-264400 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-rxqdq" [6d0480c3-f28c-4d42-a935-886432348653] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-rxqdq" [6d0480c3-f28c-4d42-a935-886432348653] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.0114102s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.42s)

TestFunctional/parallel/ServiceCmd/List (14.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 service list: (14.087479s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.09s)

TestFunctional/parallel/ServiceCmd/JSONOutput (13.98s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 service list -o json: (13.9797824s)
functional_test.go:1490: Took "13.9798631s" to run "out/minikube-windows-amd64.exe -p functional-264400 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (13.98s)

TestFunctional/parallel/Version/short (0.27s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 version --short
--- PASS: TestFunctional/parallel/Version/short (0.27s)

TestFunctional/parallel/Version/components (8.63s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 version -o=json --components: (8.6296228s)
--- PASS: TestFunctional/parallel/Version/components (8.63s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image ls --format short --alsologtostderr: (7.8994712s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-264400 image ls --format short --alsologtostderr:
registry.k8s.io/pause:3.9
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kicbase/echo-server:functional-264400
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-264400 image ls --format short --alsologtostderr:
W0722 00:22:35.969183    9424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0722 00:22:36.050185    9424 out.go:291] Setting OutFile to fd 616 ...
I0722 00:22:36.051180    9424 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:22:36.051180    9424 out.go:304] Setting ErrFile to fd 936...
I0722 00:22:36.051180    9424 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:22:36.067187    9424 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 00:22:36.067187    9424 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 00:22:36.068189    9424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
I0722 00:22:38.453273    9424 main.go:141] libmachine: [stdout =====>] : Running

I0722 00:22:38.453273    9424 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:38.467328    9424 ssh_runner.go:195] Run: systemctl --version
I0722 00:22:38.467328    9424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
I0722 00:22:40.813200    9424 main.go:141] libmachine: [stdout =====>] : Running

I0722 00:22:40.813820    9424 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:40.813820    9424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
I0722 00:22:43.566584    9424 main.go:141] libmachine: [stdout =====>] : 172.28.193.97

I0722 00:22:43.566584    9424 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:43.568156    9424 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
I0722 00:22:43.678172    9424 ssh_runner.go:235] Completed: systemctl --version: (5.2107835s)
I0722 00:22:43.693358    9424 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.90s)

TestFunctional/parallel/ImageCommands/ImageListTable (7.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image ls --format table --alsologtostderr: (7.9390071s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-264400 image ls --format table --alsologtostderr:
|-----------------------------------------|-------------------|---------------|--------|
|                  Image                  |        Tag        |   Image ID    |  Size  |
|-----------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                 | alpine            | 099a2d701db1f | 43.2MB |
| docker.io/kicbase/echo-server           | functional-264400 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver          | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/library/nginx                 | latest            | fffffc90d343c | 188MB  |
| registry.k8s.io/etcd                    | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/library/mysql                 | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                   | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.30.3           | 3edc18e7b7672 | 62MB   |
| registry.k8s.io/kube-proxy              | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
|-----------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-264400 image ls --format table --alsologtostderr:
W0722 00:22:54.041162   12468 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0722 00:22:54.124172   12468 out.go:291] Setting OutFile to fd 860 ...
I0722 00:22:54.125188   12468 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:22:54.125188   12468 out.go:304] Setting ErrFile to fd 968...
I0722 00:22:54.125188   12468 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:22:54.140156   12468 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 00:22:54.140156   12468 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 00:22:54.141154   12468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
I0722 00:22:56.534154   12468 main.go:141] libmachine: [stdout =====>] : Running

I0722 00:22:56.534234   12468 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:56.549377   12468 ssh_runner.go:195] Run: systemctl --version
I0722 00:22:56.549377   12468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
I0722 00:22:58.860844   12468 main.go:141] libmachine: [stdout =====>] : Running

I0722 00:22:58.861714   12468 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:58.861714   12468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
I0722 00:23:01.674641   12468 main.go:141] libmachine: [stdout =====>] : 172.28.193.97

I0722 00:23:01.674641   12468 main.go:141] libmachine: [stderr =====>] : 
I0722 00:23:01.675638   12468 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
I0722 00:23:01.784793   12468 ssh_runner.go:235] Completed: systemctl --version: (5.235355s)
I0722 00:23:01.792783   12468 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.94s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image ls --format json --alsologtostderr: (7.7309629s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-264400 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a53
8410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-264400"],"size":"4940000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-264400 image ls --format json --alsologtostderr:
W0722 00:22:46.302485   10168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0722 00:22:46.386083   10168 out.go:291] Setting OutFile to fd 668 ...
I0722 00:22:46.386083   10168 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:22:46.386083   10168 out.go:304] Setting ErrFile to fd 580...
I0722 00:22:46.386083   10168 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:22:46.401669   10168 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 00:22:46.401669   10168 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 00:22:46.402689   10168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
I0722 00:22:48.725889   10168 main.go:141] libmachine: [stdout =====>] : Running

I0722 00:22:48.726176   10168 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:48.738931   10168 ssh_runner.go:195] Run: systemctl --version
I0722 00:22:48.738931   10168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
I0722 00:22:51.077591   10168 main.go:141] libmachine: [stdout =====>] : Running

I0722 00:22:51.077591   10168 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:51.077861   10168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
I0722 00:22:53.752394   10168 main.go:141] libmachine: [stdout =====>] : 172.28.193.97

I0722 00:22:53.752543   10168 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:53.752543   10168 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
I0722 00:22:53.852972   10168 ssh_runner.go:235] Completed: systemctl --version: (5.1139806s)
I0722 00:22:53.862968   10168 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.73s)

TestFunctional/parallel/ImageCommands/ImageListYaml (7.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image ls --format yaml --alsologtostderr: (7.8301167s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-264400 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-264400
size: "4940000"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-264400 image ls --format yaml --alsologtostderr:
W0722 00:22:38.470261    6912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0722 00:22:38.546279    6912 out.go:291] Setting OutFile to fd 872 ...
I0722 00:22:38.562265    6912 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:22:38.562265    6912 out.go:304] Setting ErrFile to fd 464...
I0722 00:22:38.562265    6912 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:22:38.577262    6912 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 00:22:38.577262    6912 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 00:22:38.578278    6912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
I0722 00:22:40.891986    6912 main.go:141] libmachine: [stdout =====>] : Running

I0722 00:22:40.891986    6912 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:40.906958    6912 ssh_runner.go:195] Run: systemctl --version
I0722 00:22:40.906958    6912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
I0722 00:22:43.233444    6912 main.go:141] libmachine: [stdout =====>] : Running

I0722 00:22:43.233748    6912 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:43.233841    6912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
I0722 00:22:46.012178    6912 main.go:141] libmachine: [stdout =====>] : 172.28.193.97

I0722 00:22:46.012315    6912 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:46.012723    6912 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
I0722 00:22:46.119729    6912 ssh_runner.go:235] Completed: systemctl --version: (5.2127099s)
I0722 00:22:46.128724    6912 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.83s)

TestFunctional/parallel/ImageCommands/ImageBuild (28.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-264400 ssh pgrep buildkitd: exit status 1 (9.9341586s)

** stderr ** 
	W0722 00:22:43.862779   13044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image build -t localhost/my-image:functional-264400 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image build -t localhost/my-image:functional-264400 testdata\build --alsologtostderr: (10.8681983s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-264400 image build -t localhost/my-image:functional-264400 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 92f10fef26da
---> Removed intermediate container 92f10fef26da
---> ce69bc4b4ee4
Step 3/3 : ADD content.txt /
---> 18de8ccd0941
Successfully built 18de8ccd0941
Successfully tagged localhost/my-image:functional-264400
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-264400 image build -t localhost/my-image:functional-264400 testdata\build --alsologtostderr:
W0722 00:22:53.798054    6356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0722 00:22:53.872973    6356 out.go:291] Setting OutFile to fd 900 ...
I0722 00:22:53.890545    6356 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:22:53.890545    6356 out.go:304] Setting ErrFile to fd 616...
I0722 00:22:53.890545    6356 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 00:22:53.922550    6356 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 00:22:53.941554    6356 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 00:22:53.943545    6356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
I0722 00:22:56.342904    6356 main.go:141] libmachine: [stdout =====>] : Running

I0722 00:22:56.342904    6356 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:56.357346    6356 ssh_runner.go:195] Run: systemctl --version
I0722 00:22:56.357346    6356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-264400 ).state
I0722 00:22:58.661392    6356 main.go:141] libmachine: [stdout =====>] : Running

I0722 00:22:58.662201    6356 main.go:141] libmachine: [stderr =====>] : 
I0722 00:22:58.662265    6356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-264400 ).networkadapters[0]).ipaddresses[0]
I0722 00:23:01.489492    6356 main.go:141] libmachine: [stdout =====>] : 172.28.193.97

I0722 00:23:01.489592    6356 main.go:141] libmachine: [stderr =====>] : 
I0722 00:23:01.490022    6356 sshutil.go:53] new ssh client: &{IP:172.28.193.97 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-264400\id_rsa Username:docker}
I0722 00:23:01.595889    6356 ssh_runner.go:235] Completed: systemctl --version: (5.2384821s)
I0722 00:23:01.595889    6356 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2698862939.tar
I0722 00:23:01.609278    6356 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0722 00:23:01.640280    6356 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2698862939.tar
I0722 00:23:01.647681    6356 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2698862939.tar: stat -c "%s %y" /var/lib/minikube/build/build.2698862939.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2698862939.tar': No such file or directory
I0722 00:23:01.647785    6356 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2698862939.tar --> /var/lib/minikube/build/build.2698862939.tar (3072 bytes)
I0722 00:23:01.715256    6356 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2698862939
I0722 00:23:01.750957    6356 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2698862939 -xf /var/lib/minikube/build/build.2698862939.tar
I0722 00:23:01.770782    6356 docker.go:360] Building image: /var/lib/minikube/build/build.2698862939
I0722 00:23:01.780943    6356 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-264400 /var/lib/minikube/build/build.2698862939
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0722 00:23:04.436052    6356 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-264400 /var/lib/minikube/build/build.2698862939: (2.6550036s)
I0722 00:23:04.449670    6356 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2698862939
I0722 00:23:04.505523    6356 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2698862939.tar
I0722 00:23:04.531718    6356 build_images.go:217] Built localhost/my-image:functional-264400 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2698862939.tar
I0722 00:23:04.531938    6356 build_images.go:133] succeeded building to: functional-264400
I0722 00:23:04.531938    6356 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image ls: (7.5496352s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.2325674s)
functional_test.go:346: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-264400
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (20.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image load --daemon kicbase/echo-server:functional-264400 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image load --daemon kicbase/echo-server:functional-264400 --alsologtostderr: (10.4726436s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image ls: (9.5994219s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (20.07s)
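
A minimal sketch of the load-from-daemon path exercised above, assuming the image already exists in the host's Docker daemon (the Setup subtest pulls and tags it):

    # Stage the image in the host daemon, push it into the cluster's
    # runtime, and verify it is listed.
    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-264400
    out/minikube-windows-amd64.exe -p functional-264400 image load --daemon kicbase/echo-server:functional-264400
    out/minikube-windows-amd64.exe -p functional-264400 image ls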

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-264400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-264400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-264400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 12632: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 11336: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-264400 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.29s)
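
The kill errors above appear benign here: the helper tears both tunnels down best-effort and the test still passes. A hedged sketch of running one tunnel as a background process in PowerShell (Start-Process resolves the relative path against the current directory, so run it from the repo root):

    # Launch the tunnel detached, use it, then stop it explicitly.
    $p = Start-Process -FilePath .\out\minikube-windows-amd64.exe `
        -ArgumentList '-p','functional-264400','tunnel','--alsologtostderr' -PassThru
    # ... exercise LoadBalancer services while the tunnel is up ...
    Stop-Process -Id $p.Id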

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image load --daemon kicbase/echo-server:functional-264400 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image load --daemon kicbase/echo-server:functional-264400 --alsologtostderr: (10.7081461s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image ls: (8.9517224s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (19.66s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-264400 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-264400 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [477e7501-a242-41ed-bce9-958c560c29b0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [477e7501-a242-41ed-bce9-958c560c29b0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0202482s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.75s)
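
A minimal sketch of that readiness wait using kubectl directly instead of the test's poll loop, with the same context, selector, and 4m budget as above:

    kubectl --context functional-264400 apply -f testdata\testsvc.yaml
    kubectl --context functional-264400 wait pod -l run=nginx-svc --for=condition=Ready --timeout=4m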

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (20.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:234: (dbg) Done: docker pull kicbase/echo-server:latest: (1.0195427s)
functional_test.go:239: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-264400
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image load --daemon kicbase/echo-server:functional-264400 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image load --daemon kicbase/echo-server:functional-264400 --alsologtostderr: (10.5450511s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image ls: (8.9092908s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (20.69s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-264400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 12712: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (50.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-264400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-264400"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-264400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-264400": (33.6589119s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-264400 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-264400 docker-env | Invoke-Expression ; docker images": (17.0746206s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (50.75s)
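
A minimal sketch of what the quoted one-liners do, for an interactive session: docker-env prints environment assignments, Invoke-Expression applies them to the current shell, and the host docker CLI then talks to the cluster's daemon:

    out/minikube-windows-amd64.exe -p functional-264400 docker-env | Invoke-Expression
    docker images                                   # now lists in-cluster images
    out/minikube-windows-amd64.exe status -p functional-264400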

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image save kicbase/echo-server:functional-264400 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image save kicbase/echo-server:functional-264400 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (9.6488609s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (17.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image rm kicbase/echo-server:functional-264400 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image rm kicbase/echo-server:functional-264400 --alsologtostderr: (9.0409134s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image ls: (8.591885s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (17.63s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 update-context --alsologtostderr -v=2: (2.692709s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.69s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 update-context --alsologtostderr -v=2: (2.6728281s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.67s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 update-context --alsologtostderr -v=2: (2.5626062s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.56s)
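
All three UpdateContextCmd subtests drive the same command; a minimal sketch, assuming the functional-264400 kubeconfig entry exists (update-context rewrites it to the VM's current IP, which matters after a DHCP lease change):

    out/minikube-windows-amd64.exe -p functional-264400 update-context --alsologtostderr -v=2
    kubectl config get-contexts functional-264400   # inspect the refreshed entry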

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (9.34028s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image ls: (8.9339553s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.28s)
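
Together with ImageSaveToFile above, this closes the tarball round trip. A minimal sketch, writing the tar to the current directory rather than the Jenkins workspace path used in this run:

    out/minikube-windows-amd64.exe -p functional-264400 image save kicbase/echo-server:functional-264400 .\echo-server-save.tar
    out/minikube-windows-amd64.exe -p functional-264400 image rm kicbase/echo-server:functional-264400
    out/minikube-windows-amd64.exe -p functional-264400 image load .\echo-server-save.tar
    out/minikube-windows-amd64.exe -p functional-264400 image ls   # tag should be back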

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (12.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.8091439s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (12.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (12.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (12.6693627s)
functional_test.go:1311: Took "12.6698395s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "243.9024ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (12.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi kicbase/echo-server:functional-264400
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-264400 image save --daemon kicbase/echo-server:functional-264400 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-264400 image save --daemon kicbase/echo-server:functional-264400 --alsologtostderr: (9.8620729s)
functional_test.go:428: (dbg) Run:  docker image inspect kicbase/echo-server:functional-264400
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (11.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.6496446s)
functional_test.go:1362: Took "11.6506441s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "248.7685ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (11.90s)
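
The timing split above is the point of --light: it skips the per-cluster status probe, so it returns in ~250ms while the full listing takes ~12s against live Hyper-V VMs. A minimal sketch of consuming the JSON from PowerShell (Out-String joins the output lines before parsing, which Windows PowerShell 5.1 needs):

    out/minikube-windows-amd64.exe profile list -o json --light | Out-String | ConvertFrom-Json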

                                                
                                    
TestFunctional/delete_echo-server_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:1.0: context deadline exceeded (102.1µs)
functional_test.go:191: failed to remove image "kicbase/echo-server:1.0" from docker images. args "docker rmi -f kicbase/echo-server:1.0": context deadline exceeded
functional_test.go:189: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-264400
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:functional-264400: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "kicbase/echo-server:functional-264400" from docker images. args "docker rmi -f kicbase/echo-server:functional-264400": context deadline exceeded
--- PASS: TestFunctional/delete_echo-server_images (0.02s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-264400
functional_test.go:197: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-264400: context deadline exceeded (0s)
functional_test.go:199: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-264400": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-264400
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-264400: context deadline exceeded (0s)
functional_test.go:207: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-264400": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (750.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-474700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0722 00:29:11.891317    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 00:29:32.361577    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:32.376949    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:32.392231    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:32.424257    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:32.472281    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:32.566100    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:32.737846    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:33.065032    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:33.711446    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:34.997405    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:37.561078    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:42.689938    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:29:52.943073    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:30:13.438120    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:30:54.410543    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:32:16.335811    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:33:55.076743    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 00:34:11.884876    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 00:34:32.374687    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 00:35:00.192418    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-474700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m52.827022s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 status -v=7 --alsologtostderr: (38.0668056s)
--- PASS: TestMultiControlPlane/serial/StartCluster (750.89s)
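
A minimal sketch of the HA bring-up: --ha provisions multiple control-plane nodes (ha-474700, -m02, and -m03 in this run, as the CopyFile matrix below shows), and status then reports each one:

    out/minikube-windows-amd64.exe start -p ha-474700 --ha --wait=true --memory=2200 -v=7 --alsologtostderr --driver=hyperv
    out/minikube-windows-amd64.exe -p ha-474700 status -v=7 --alsologtostderr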

                                                
                                    
TestMultiControlPlane/serial/DeployApp (12.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- rollout status deployment/busybox
E0722 00:39:11.889557    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-474700 -- rollout status deployment/busybox: (4.8229979s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-7fbtz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-7fbtz -- nslookup kubernetes.io: (1.8231586s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-sv6jt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-tdwp8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-7fbtz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-sv6jt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-tdwp8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-7fbtz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-sv6jt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-tdwp8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.91s)
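
A minimal sketch of the DNS assertions: resolve an external name and the in-cluster service name from one of the busybox pods (pod names such as busybox-fc5497c4f-7fbtz vary per run):

    out/minikube-windows-amd64.exe kubectl -p ha-474700 -- rollout status deployment/busybox
    out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-7fbtz -- nslookup kubernetes.io
    out/minikube-windows-amd64.exe kubectl -p ha-474700 -- exec busybox-fc5497c4f-7fbtz -- nslookup kubernetes.default.svc.cluster.local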

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (275.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-474700 -v=7 --alsologtostderr
E0722 00:44:11.899257    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-474700 -v=7 --alsologtostderr: (3m43.9867494s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 status -v=7 --alsologtostderr
E0722 00:44:32.380805    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 status -v=7 --alsologtostderr: (51.4210071s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (275.41s)
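
A minimal sketch of growing the running HA cluster: without --control-plane, node add joins the new node as a worker (it becomes ha-474700-m04 below):

    out/minikube-windows-amd64.exe node add -p ha-474700 -v=7 --alsologtostderr
    out/minikube-windows-amd64.exe -p ha-474700 status -v=7 --alsologtostderr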

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-474700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.22s)
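
A minimal sketch of the same probe, plus the more readable built-in view:

    kubectl --context ha-474700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
    kubectl --context ha-474700 get nodes --show-labels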

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (29.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (29.8146772s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (29.81s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (657.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 status --output json -v=7 --alsologtostderr
E0722 00:45:55.565208    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 status --output json -v=7 --alsologtostderr: (50.6083033s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp testdata\cp-test.txt ha-474700:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp testdata\cp-test.txt ha-474700:/home/docker/cp-test.txt: (10.0103712s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test.txt": (9.9635792s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4265906700\001\cp-test_ha-474700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4265906700\001\cp-test_ha-474700.txt: (9.9288774s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test.txt": (10.1047069s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700:/home/docker/cp-test.txt ha-474700-m02:/home/docker/cp-test_ha-474700_ha-474700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700:/home/docker/cp-test.txt ha-474700-m02:/home/docker/cp-test_ha-474700_ha-474700-m02.txt: (17.2898954s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test.txt": (10.021666s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test_ha-474700_ha-474700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test_ha-474700_ha-474700-m02.txt": (9.9144057s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700:/home/docker/cp-test.txt ha-474700-m03:/home/docker/cp-test_ha-474700_ha-474700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700:/home/docker/cp-test.txt ha-474700-m03:/home/docker/cp-test_ha-474700_ha-474700-m03.txt: (17.3794136s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test.txt": (10.0278423s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test_ha-474700_ha-474700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test_ha-474700_ha-474700-m03.txt": (9.8989928s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700:/home/docker/cp-test.txt ha-474700-m04:/home/docker/cp-test_ha-474700_ha-474700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700:/home/docker/cp-test.txt ha-474700-m04:/home/docker/cp-test_ha-474700_ha-474700-m04.txt: (17.3336124s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test.txt": (9.8630006s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test_ha-474700_ha-474700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test_ha-474700_ha-474700-m04.txt": (9.915539s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp testdata\cp-test.txt ha-474700-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp testdata\cp-test.txt ha-474700-m02:/home/docker/cp-test.txt: (9.9768114s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test.txt"
E0722 00:49:11.895945    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test.txt": (9.9111504s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4265906700\001\cp-test_ha-474700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4265906700\001\cp-test_ha-474700-m02.txt: (10.0012819s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test.txt"
E0722 00:49:32.391600    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test.txt": (10.0275976s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m02:/home/docker/cp-test.txt ha-474700:/home/docker/cp-test_ha-474700-m02_ha-474700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m02:/home/docker/cp-test.txt ha-474700:/home/docker/cp-test_ha-474700-m02_ha-474700.txt: (17.4588995s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test.txt": (10.001234s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test_ha-474700-m02_ha-474700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test_ha-474700-m02_ha-474700.txt": (9.9384662s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m02:/home/docker/cp-test.txt ha-474700-m03:/home/docker/cp-test_ha-474700-m02_ha-474700-m03.txt
E0722 00:50:35.100837    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m02:/home/docker/cp-test.txt ha-474700-m03:/home/docker/cp-test_ha-474700-m02_ha-474700-m03.txt: (17.3112543s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test.txt": (9.9386017s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test_ha-474700-m02_ha-474700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test_ha-474700-m02_ha-474700-m03.txt": (9.9199213s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m02:/home/docker/cp-test.txt ha-474700-m04:/home/docker/cp-test_ha-474700-m02_ha-474700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m02:/home/docker/cp-test.txt ha-474700-m04:/home/docker/cp-test_ha-474700-m02_ha-474700-m04.txt: (17.2286391s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test.txt": (9.8918048s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test_ha-474700-m02_ha-474700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test_ha-474700-m02_ha-474700-m04.txt": (9.9617209s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp testdata\cp-test.txt ha-474700-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp testdata\cp-test.txt ha-474700-m03:/home/docker/cp-test.txt: (10.0327266s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test.txt": (9.9507483s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4265906700\001\cp-test_ha-474700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4265906700\001\cp-test_ha-474700-m03.txt: (9.9451149s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test.txt": (9.9249091s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt ha-474700:/home/docker/cp-test_ha-474700-m03_ha-474700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt ha-474700:/home/docker/cp-test_ha-474700-m03_ha-474700.txt: (17.6287367s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test.txt": (9.9017513s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test_ha-474700-m03_ha-474700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test_ha-474700-m03_ha-474700.txt": (9.9210525s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt ha-474700-m02:/home/docker/cp-test_ha-474700-m03_ha-474700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt ha-474700-m02:/home/docker/cp-test_ha-474700-m03_ha-474700-m02.txt: (17.2826685s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test.txt": (9.9061704s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test_ha-474700-m03_ha-474700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test_ha-474700-m03_ha-474700-m02.txt": (9.9028123s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt ha-474700-m04:/home/docker/cp-test_ha-474700-m03_ha-474700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m03:/home/docker/cp-test.txt ha-474700-m04:/home/docker/cp-test_ha-474700-m03_ha-474700-m04.txt: (17.5633212s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test.txt": (9.9959417s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test_ha-474700-m03_ha-474700-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test_ha-474700-m03_ha-474700-m04.txt": (10.0158164s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp testdata\cp-test.txt ha-474700-m04:/home/docker/cp-test.txt
E0722 00:54:11.898738    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp testdata\cp-test.txt ha-474700-m04:/home/docker/cp-test.txt: (10.0634104s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test.txt": (10.0983061s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4265906700\001\cp-test_ha-474700-m04.txt
E0722 00:54:32.385637    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4265906700\001\cp-test_ha-474700-m04.txt: (9.8945198s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test.txt": (10.112905s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt ha-474700:/home/docker/cp-test_ha-474700-m04_ha-474700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt ha-474700:/home/docker/cp-test_ha-474700-m04_ha-474700.txt: (17.2772536s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test.txt": (9.8974987s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test_ha-474700-m04_ha-474700.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700 "sudo cat /home/docker/cp-test_ha-474700-m04_ha-474700.txt": (9.9448113s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt ha-474700-m02:/home/docker/cp-test_ha-474700-m04_ha-474700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt ha-474700-m02:/home/docker/cp-test_ha-474700-m04_ha-474700-m02.txt: (17.3819788s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test.txt": (9.9141257s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test_ha-474700-m04_ha-474700-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test_ha-474700-m04_ha-474700-m02.txt": (9.9481717s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt ha-474700-m03:/home/docker/cp-test_ha-474700-m04_ha-474700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 cp ha-474700-m04:/home/docker/cp-test.txt ha-474700-m03:/home/docker/cp-test_ha-474700-m04_ha-474700-m03.txt: (17.2776255s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m04 "sudo cat /home/docker/cp-test.txt": (9.966572s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test_ha-474700-m04_ha-474700-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m03 "sudo cat /home/docker/cp-test_ha-474700-m04_ha-474700-m03.txt": (9.9044964s)
--- PASS: TestMultiControlPlane/serial/CopyFile (657.51s)
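
The matrix above covers every host/node and node/node pair across the four nodes; a minimal sketch of one host-to-node round trip from it:

    out/minikube-windows-amd64.exe -p ha-474700 cp testdata\cp-test.txt ha-474700-m02:/home/docker/cp-test.txt
    out/minikube-windows-amd64.exe -p ha-474700 ssh -n ha-474700-m02 "sudo cat /home/docker/cp-test.txt"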

                                                
                                    
TestImageBuild/serial/Setup (202.16s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-117100 --driver=hyperv
E0722 01:02:35.588193    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-117100 --driver=hyperv: (3m22.1637664s)
--- PASS: TestImageBuild/serial/Setup (202.16s)

                                                
                                    
TestImageBuild/serial/NormalBuild (10.17s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-117100
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-117100: (10.1705853s)
--- PASS: TestImageBuild/serial/NormalBuild (10.17s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (9.31s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-117100
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-117100: (9.3142014s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.31s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (7.97s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-117100
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-117100: (7.9726572s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.97s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.78s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-117100
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-117100: (7.7847182s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.78s)
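
A minimal sketch collecting the build variants exercised above: a plain build, build-args with the cache disabled, and an explicit Dockerfile path inside the context:

    out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-117100
    out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-117100
    out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-117100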

                                                
                                    
TestJSONOutput/start/Command (212.8s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-026200 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0722 01:07:15.117645    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-026200 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m32.7838401s)
--- PASS: TestJSONOutput/start/Command (212.80s)
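
With --output=json, minikube emits one CloudEvents-style JSON object per line, so each step is machine-parseable; a hedged sketch of consuming it from PowerShell (the data field holds the step payload):

    out/minikube-windows-amd64.exe start -p json-output-026200 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv |
        ForEach-Object { ($_ | ConvertFrom-Json).data }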

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (7.79s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-026200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-026200 --output=json --user=testUser: (7.7855305s)
--- PASS: TestJSONOutput/pause/Command (7.79s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.72s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-026200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-026200 --output=json --user=testUser: (7.7215976s)
--- PASS: TestJSONOutput/unpause/Command (7.72s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (34.04s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-026200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-026200 --output=json --user=testUser: (34.0357972s)
--- PASS: TestJSONOutput/stop/Command (34.04s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.43s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-568100 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-568100 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (259.0652ms)
-- stdout --
	{"specversion":"1.0","id":"489202f4-095c-4227-93d5-e492cae0f7fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-568100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"37babb11-19a0-4a89-b3ce-4208605cd1e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"f80cc6f9-65ac-4b52-9eb0-d51dceb7a0db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ec5db478-8dff-4184-ae71-6c4f3025eb9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"2ff7d391-70e7-4ba8-810b-b62884c4647d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"8615b0e8-8633-45af-b96d-34eb7a665dc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"75a5936c-a64d-4e00-9254-3472d398be48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0722 01:09:14.320249    4404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-568100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-568100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-568100: (1.1574026s)
--- PASS: TestErrorJSONOutput (1.43s)
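
The failing driver produces a single io.k8s.sigs.minikube.error event, and the process exit status (56) matches the exitcode field inside that event. A small sketch, using the captured line abridged to the fields of interest, of pulling those values out:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The error event captured above, trimmed to the asserted fields.
	line := []byte(`{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",` +
		`"data":{"exitcode":"56","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS"}}`)
	var ev struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}
	if err := json.Unmarshal(line, &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Data["name"], "=> exit code", ev.Data["exitcode"])
}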

TestMainNoArgs (0.23s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.23s)

TestMinikubeProfile (529.38s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-706900 --driver=hyperv
E0722 01:09:32.391518    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-706900 --driver=hyperv: (3m17.6929043s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-706900 --driver=hyperv
E0722 01:14:11.914916    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 01:14:32.404784    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-706900 --driver=hyperv: (3m23.4563435s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-706900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (21.3250047s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-706900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (21.065477s)
helpers_test.go:175: Cleaning up "second-706900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-706900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-706900: (44.9990445s)
helpers_test.go:175: Cleaning up "first-706900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-706900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-706900: (39.9748474s)
--- PASS: TestMinikubeProfile (529.38s)
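
Both profile listings above use `profile list -ojson`. A hedged sketch of consuming that output follows; it assumes the top-level "valid"/"invalid" arrays with a Name field per profile that current minikube releases emit, so verify the schema against your version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	// Assumed schema: {"invalid":[...],"valid":[{"Name":...,...},...]}.
	var profiles map[string][]struct {
		Name string `json:"Name"`
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	for _, p := range profiles["valid"] {
		fmt.Println("valid profile:", p.Name) // first-706900, second-706900 above
	}
}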

TestMountStart/serial/StartWithMountFirst (154.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-856900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0722 01:19:11.943535    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 01:19:15.610273    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 01:19:32.406278    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-856900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m33.0729962s)
--- PASS: TestMountStart/serial/StartWithMountFirst (154.08s)

TestMountStart/serial/VerifyMountFirst (9.81s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-856900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-856900 ssh -- ls /minikube-host: (9.8065747s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.81s)

TestMountStart/serial/StartWithMountSecond (159.49s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-856900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-856900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m38.4904784s)
--- PASS: TestMountStart/serial/StartWithMountSecond (159.49s)

TestMountStart/serial/VerifyMountSecond (9.78s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-856900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-856900 ssh -- ls /minikube-host: (9.7801694s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.78s)

TestMountStart/serial/DeleteFirst (31.41s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-856900 --alsologtostderr -v=5
E0722 01:23:55.143129    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-856900 --alsologtostderr -v=5: (31.4070647s)
--- PASS: TestMountStart/serial/DeleteFirst (31.41s)

TestMountStart/serial/VerifyMountPostDelete (9.76s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-856900 ssh -- ls /minikube-host
E0722 01:24:11.932174    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-856900 ssh -- ls /minikube-host: (9.7571693s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.76s)

TestMountStart/serial/Stop (27.03s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-856900
E0722 01:24:32.407101    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-856900: (27.029931s)
--- PASS: TestMountStart/serial/Stop (27.03s)

TestMountStart/serial/RestartStopped (119.07s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-856900
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-856900: (1m58.0541024s)
--- PASS: TestMountStart/serial/RestartStopped (119.07s)

TestMountStart/serial/VerifyMountPostStop (9.48s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-856900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-856900 ssh -- ls /minikube-host: (9.4698471s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.48s)
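
Every VerifyMount* step in this sequence is the same probe: run `minikube ssh -- ls /minikube-host` against the profile, with a zero exit meaning the 9p host mount is visible in the guest. As a sketch (assuming minikube on PATH):

package main

import (
	"fmt"
	"os/exec"
)

// mounted reports whether the host directory is visible at /minikube-host
// inside the guest, exactly as the VerifyMount* steps check it.
func mounted(profile string) bool {
	return exec.Command("minikube", "-p", profile, "ssh", "--", "ls", "/minikube-host").Run() == nil
}

func main() {
	fmt.Println("mount-start-2-856900 mounted:", mounted("mount-start-2-856900"))
}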

TestMultiNode/serial/FreshStart2Nodes (455.95s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-227000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0722 01:29:11.931492    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 01:29:32.410893    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 01:34:11.938980    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 01:34:32.420357    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-227000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (7m10.3564698s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 status --alsologtostderr: (25.5954569s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (455.95s)

TestMultiNode/serial/DeployApp2Nodes (9.65s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- rollout status deployment/busybox: (3.4889439s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-5bv2m -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-5bv2m -- nslookup kubernetes.io: (1.8321472s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-tzrg5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-5bv2m -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-tzrg5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-5bv2m -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-227000 -- exec busybox-fc5497c4f-tzrg5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.65s)
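
The DNS round trip here is mechanical: read the busybox pod names back with a jsonpath query, then nslookup each of the three names from inside every pod. A sketch of the same loop with plain kubectl (the suite routes it through minikube kubectl):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pod names come back space-separated from the jsonpath query.
	out, err := exec.Command("kubectl", "--context", "multinode-227000",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range names {
			if err := exec.Command("kubectl", "--context", "multinode-227000",
				"exec", pod, "--", "nslookup", host).Run(); err != nil {
				fmt.Printf("%s: nslookup %s failed: %v\n", pod, host, err)
			}
		}
	}
}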

TestMultiNode/serial/AddNode (257.48s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-227000 -v 3 --alsologtostderr
E0722 01:39:11.932286    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 01:39:32.412701    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-227000 -v 3 --alsologtostderr: (3m38.6878928s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 status --alsologtostderr: (38.7877292s)
--- PASS: TestMultiNode/serial/AddNode (257.48s)

TestMultiNode/serial/MultiNodeLabels (0.2s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-227000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.20s)

TestMultiNode/serial/ProfileList (12.83s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0722 01:40:35.160757    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.8276848s)
--- PASS: TestMultiNode/serial/ProfileList (12.83s)

TestMultiNode/serial/CopyFile (379.09s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 status --output json --alsologtostderr: (37.3219156s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp testdata\cp-test.txt multinode-227000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp testdata\cp-test.txt multinode-227000:/home/docker/cp-test.txt: (9.8842436s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test.txt": (9.8897613s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2033814248\001\cp-test_multinode-227000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2033814248\001\cp-test_multinode-227000.txt: (9.7127785s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test.txt": (9.8305941s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000:/home/docker/cp-test.txt multinode-227000-m02:/home/docker/cp-test_multinode-227000_multinode-227000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000:/home/docker/cp-test.txt multinode-227000-m02:/home/docker/cp-test_multinode-227000_multinode-227000-m02.txt: (16.9351055s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test.txt": (9.9393912s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test_multinode-227000_multinode-227000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test_multinode-227000_multinode-227000-m02.txt": (9.947376s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000:/home/docker/cp-test.txt multinode-227000-m03:/home/docker/cp-test_multinode-227000_multinode-227000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000:/home/docker/cp-test.txt multinode-227000-m03:/home/docker/cp-test_multinode-227000_multinode-227000-m03.txt: (17.1802025s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test.txt": (9.8577418s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test_multinode-227000_multinode-227000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test_multinode-227000_multinode-227000-m03.txt": (10.0640668s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp testdata\cp-test.txt multinode-227000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp testdata\cp-test.txt multinode-227000-m02:/home/docker/cp-test.txt: (10.0062789s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test.txt": (10.0618868s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2033814248\001\cp-test_multinode-227000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2033814248\001\cp-test_multinode-227000-m02.txt: (9.907018s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test.txt": (9.8776071s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m02:/home/docker/cp-test.txt multinode-227000:/home/docker/cp-test_multinode-227000-m02_multinode-227000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m02:/home/docker/cp-test.txt multinode-227000:/home/docker/cp-test_multinode-227000-m02_multinode-227000.txt: (17.4104205s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test.txt"
E0722 01:44:11.942368    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test.txt": (9.8549806s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test_multinode-227000-m02_multinode-227000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test_multinode-227000-m02_multinode-227000.txt": (9.9105805s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m02:/home/docker/cp-test.txt multinode-227000-m03:/home/docker/cp-test_multinode-227000-m02_multinode-227000-m03.txt
E0722 01:44:32.431060    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m02:/home/docker/cp-test.txt multinode-227000-m03:/home/docker/cp-test_multinode-227000-m02_multinode-227000-m03.txt: (17.2468222s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test.txt": (9.8553693s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test_multinode-227000-m02_multinode-227000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test_multinode-227000-m02_multinode-227000-m03.txt": (9.8809855s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp testdata\cp-test.txt multinode-227000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp testdata\cp-test.txt multinode-227000-m03:/home/docker/cp-test.txt: (9.8576485s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test.txt": (9.8349639s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2033814248\001\cp-test_multinode-227000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2033814248\001\cp-test_multinode-227000-m03.txt: (10.1541856s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test.txt": (10.2847395s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m03:/home/docker/cp-test.txt multinode-227000:/home/docker/cp-test_multinode-227000-m03_multinode-227000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m03:/home/docker/cp-test.txt multinode-227000:/home/docker/cp-test_multinode-227000-m03_multinode-227000.txt: (17.3533852s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test.txt": (9.8379678s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test_multinode-227000-m03_multinode-227000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000 "sudo cat /home/docker/cp-test_multinode-227000-m03_multinode-227000.txt": (9.8353896s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m03:/home/docker/cp-test.txt multinode-227000-m02:/home/docker/cp-test_multinode-227000-m03_multinode-227000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 cp multinode-227000-m03:/home/docker/cp-test.txt multinode-227000-m02:/home/docker/cp-test_multinode-227000-m03_multinode-227000-m02.txt: (17.3807149s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m03 "sudo cat /home/docker/cp-test.txt": (9.9029786s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test_multinode-227000-m03_multinode-227000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 ssh -n multinode-227000-m02 "sudo cat /home/docker/cp-test_multinode-227000-m03_multinode-227000-m02.txt": (9.9664904s)
--- PASS: TestMultiNode/serial/CopyFile (379.09s)
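
Each of the cp/ssh pairs above is one round trip: push testdata\cp-test.txt to a node with `minikube cp`, read it back via `minikube ssh -n <node> "sudo cat ..."`, and compare. A single-round-trip sketch with minimal error handling:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile(`testdata\cp-test.txt`)
	if err != nil {
		panic(err)
	}
	run := func(args ...string) []byte {
		out, err := exec.Command("minikube", append([]string{"-p", "multinode-227000"}, args...)...).Output()
		if err != nil {
			panic(err)
		}
		return out
	}
	// Push to the m02 node, then read it back over ssh as the test does.
	run("cp", `testdata\cp-test.txt`, "multinode-227000-m02:/home/docker/cp-test.txt")
	got := run("ssh", "-n", "multinode-227000-m02", "sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("copied file does not match the local source")
	}
}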

TestMultiNode/serial/StopNode (80.06s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 node stop m03: (25.991032s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-227000 status: exit status 7 (27.0940266s)
-- stdout --
	multinode-227000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-227000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-227000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0722 01:47:27.877658   12988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-227000 status --alsologtostderr: exit status 7 (26.9533344s)
-- stdout --
	multinode-227000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-227000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-227000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0722 01:47:54.967367   11712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0722 01:47:55.058066   11712 out.go:291] Setting OutFile to fd 1000 ...
	I0722 01:47:55.059136   11712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:47:55.059136   11712 out.go:304] Setting ErrFile to fd 764...
	I0722 01:47:55.059136   11712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 01:47:55.071414   11712 out.go:298] Setting JSON to false
	I0722 01:47:55.071414   11712 mustload.go:65] Loading cluster: multinode-227000
	I0722 01:47:55.071414   11712 notify.go:220] Checking for updates...
	I0722 01:47:55.073970   11712 config.go:182] Loaded profile config "multinode-227000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 01:47:55.073970   11712 status.go:255] checking status of multinode-227000 ...
	I0722 01:47:55.074831   11712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:47:57.358001   11712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:47:57.358001   11712 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:47:57.358001   11712 status.go:330] multinode-227000 host status = "Running" (err=<nil>)
	I0722 01:47:57.358001   11712 host.go:66] Checking if "multinode-227000" exists ...
	I0722 01:47:57.358675   11712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:47:59.614348   11712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:47:59.614348   11712 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:47:59.614348   11712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:48:02.333909   11712 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:48:02.333909   11712 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:48:02.333909   11712 host.go:66] Checking if "multinode-227000" exists ...
	I0722 01:48:02.356867   11712 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 01:48:02.356951   11712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000 ).state
	I0722 01:48:04.571476   11712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:48:04.572257   11712 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:48:04.572324   11712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000 ).networkadapters[0]).ipaddresses[0]
	I0722 01:48:07.224850   11712 main.go:141] libmachine: [stdout =====>] : 172.28.193.96
	
	I0722 01:48:07.236436   11712 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:48:07.236648   11712 sshutil.go:53] new ssh client: &{IP:172.28.193.96 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000\id_rsa Username:docker}
	I0722 01:48:07.350313   11712 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9933052s)
	I0722 01:48:07.364009   11712 ssh_runner.go:195] Run: systemctl --version
	I0722 01:48:07.388606   11712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 01:48:07.419789   11712 kubeconfig.go:125] found "multinode-227000" server: "https://172.28.193.96:8443"
	I0722 01:48:07.419789   11712 api_server.go:166] Checking apiserver status ...
	I0722 01:48:07.431016   11712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 01:48:07.469829   11712 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2109/cgroup
	W0722 01:48:07.495163   11712 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2109/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 01:48:07.506953   11712 ssh_runner.go:195] Run: ls
	I0722 01:48:07.516601   11712 api_server.go:253] Checking apiserver healthz at https://172.28.193.96:8443/healthz ...
	I0722 01:48:07.524023   11712 api_server.go:279] https://172.28.193.96:8443/healthz returned 200:
	ok
	I0722 01:48:07.524023   11712 status.go:422] multinode-227000 apiserver status = Running (err=<nil>)
	I0722 01:48:07.524023   11712 status.go:257] multinode-227000 status: &{Name:multinode-227000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 01:48:07.524023   11712 status.go:255] checking status of multinode-227000-m02 ...
	I0722 01:48:07.526755   11712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:48:09.746114   11712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:48:09.752538   11712 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:48:09.752538   11712 status.go:330] multinode-227000-m02 host status = "Running" (err=<nil>)
	I0722 01:48:09.752686   11712 host.go:66] Checking if "multinode-227000-m02" exists ...
	I0722 01:48:09.753721   11712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:48:12.014467   11712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:48:12.014467   11712 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:48:12.025899   11712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:48:14.612460   11712 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:48:14.612460   11712 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:48:14.623755   11712 host.go:66] Checking if "multinode-227000-m02" exists ...
	I0722 01:48:14.636260   11712 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 01:48:14.636260   11712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m02 ).state
	I0722 01:48:16.821917   11712 main.go:141] libmachine: [stdout =====>] : Running
	
	I0722 01:48:16.824917   11712 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:48:16.824984   11712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-227000-m02 ).networkadapters[0]).ipaddresses[0]
	I0722 01:48:19.430003   11712 main.go:141] libmachine: [stdout =====>] : 172.28.193.41
	
	I0722 01:48:19.442745   11712 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:48:19.443403   11712 sshutil.go:53] new ssh client: &{IP:172.28.193.41 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-227000-m02\id_rsa Username:docker}
	I0722 01:48:19.539557   11712 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9032408s)
	I0722 01:48:19.553911   11712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 01:48:19.580821   11712 status.go:257] multinode-227000-m02 status: &{Name:multinode-227000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0722 01:48:19.581002   11712 status.go:255] checking status of multinode-227000-m03 ...
	I0722 01:48:19.582349   11712 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-227000-m03 ).state
	I0722 01:48:21.785765   11712 main.go:141] libmachine: [stdout =====>] : Off
	
	I0722 01:48:21.798497   11712 main.go:141] libmachine: [stderr =====>] : 
	I0722 01:48:21.798685   11712 status.go:330] multinode-227000-m03 host status = "Stopped" (err=<nil>)
	I0722 01:48:21.798746   11712 status.go:343] host is not running, skipping remaining checks
	I0722 01:48:21.798746   11712 status.go:257] multinode-227000-m03 status: &{Name:multinode-227000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (80.06s)
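
Note that both status invocations return exit status 7 rather than 0 once m03 is stopped, while still printing the per-node table, so a non-zero exit here is a cue to inspect stdout rather than a hard failure. A sketch of that handling:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "multinode-227000", "status").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode()) // 7 while m03 is stopped
	} else if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // the per-node table is still printed on exit 7
}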

TestMultiNode/serial/StartAfterStop (197.49s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 node start m03 -v=7 --alsologtostderr
E0722 01:49:11.949292    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 01:49:32.427003    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 node start m03 -v=7 --alsologtostderr: (2m42.0302256s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-227000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-227000 status -v=7 --alsologtostderr: (35.2779726s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (197.49s)

TestPreload (570.33s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-567500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0722 01:59:11.949392    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 01:59:32.442164    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-567500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m38.1023706s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-567500 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-567500 image pull gcr.io/k8s-minikube/busybox: (8.8882955s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-567500
E0722 02:04:11.973812    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-567500: (38.9403504s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-567500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0722 02:04:32.437125    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-567500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (3m14.1458515s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-567500 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-567500 image list: (7.4230445s)
helpers_test.go:175: Cleaning up "test-preload-567500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-567500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-567500: (42.8201147s)
--- PASS: TestPreload (570.33s)
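
The closing image-list step is what proves the preload behavior: the busybox image pulled before the stop must still be present after the restart. A sketch of that check (assuming minikube on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "test-preload-567500", "image", "list").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox image survived the stop/start cycle")
	}
}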

TestScheduledStopWindows (338.46s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-148900 --memory=2048 --driver=hyperv
E0722 02:09:11.959138    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
E0722 02:09:15.668000    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
E0722 02:09:32.442143    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-264400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-148900 --memory=2048 --driver=hyperv: (3m23.5481984s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-148900 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-148900 --schedule 5m: (11.2084885s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-148900 -n scheduled-stop-148900
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-148900 -n scheduled-stop-148900: exit status 1 (10.0209879s)
** stderr ** 
	W0722 02:11:59.631269    6376 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-148900 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-148900 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.9911961s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-148900 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-148900 --schedule 5s: (11.0747652s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-148900
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-148900: exit status 7 (2.429277s)
-- stdout --
	scheduled-stop-148900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	W0722 02:13:30.732804    2936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-148900 -n scheduled-stop-148900
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-148900 -n scheduled-stop-148900: exit status 7 (2.4456695s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0722 02:13:33.159781   11740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-148900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-148900
E0722 02:13:55.196116    5100 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-979300\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-148900: (27.7246276s)
--- PASS: TestScheduledStopWindows (338.46s)
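
For reference, the scheduled-stop flow exercised above can be replayed by hand. A minimal sketch using only commands that appear in this log, except for --cancel-scheduled, which is taken from minikube's stop flags as an assumption and is not exercised in this run:

	# schedule a stop 5 minutes out, then query the countdown via the Go status template
	$ out/minikube-windows-amd64.exe stop -p scheduled-stop-148900 --schedule 5m
	$ out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-148900
	# inspect the systemd unit inside the guest that performs the stop
	$ out/minikube-windows-amd64.exe ssh -p scheduled-stop-148900 -- sudo systemctl show minikube-scheduled-stop --no-page
	# cancel a pending scheduled stop (assumed flag; see note above)
	$ out/minikube-windows-amd64.exe stop -p scheduled-stop-148900 --cancel-scheduled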

TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-749900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-749900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (371.7863ms)

-- stdout --
	* [NoKubernetes-749900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0722 02:14:03.359034   12464 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)
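
The exit status 14 above is the expected MK_USAGE rejection: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal reproduction; the first two commands come straight from the output above, while the final start is an assumed follow-up once no global version is pinned:

	$ minikube start -p NoKubernetes-749900 --no-kubernetes --kubernetes-version=1.20   # rejected, exit status 14 (MK_USAGE)
	$ minikube config unset kubernetes-version                                          # clear a globally configured version
	$ minikube start -p NoKubernetes-749900 --no-kubernetes                             # assumed: boots the VM without Kubernetes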


Test skip (32/201)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (199.92s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-264400 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-264400 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 7684: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (199.92s)

TestFunctional/parallel/DryRun (5.04s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-264400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-264400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0360374s)

-- stdout --
	* [functional-264400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0722 00:21:57.789460   11632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0722 00:21:57.870088   11632 out.go:291] Setting OutFile to fd 936 ...
	I0722 00:21:57.870088   11632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:21:57.870088   11632 out.go:304] Setting ErrFile to fd 580...
	I0722 00:21:57.870088   11632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:21:57.893078   11632 out.go:298] Setting JSON to false
	I0722 00:21:57.897078   11632 start.go:129] hostinfo: {"hostname":"minikube6","uptime":122925,"bootTime":1721484792,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0722 00:21:57.897078   11632 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 00:21:57.902078   11632 out.go:177] * [functional-264400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0722 00:21:57.908096   11632 notify.go:220] Checking for updates...
	I0722 00:21:57.912083   11632 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:21:57.915078   11632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:21:57.917087   11632 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0722 00:21:57.920083   11632 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:21:57.923100   11632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:21:57.926083   11632 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:21:57.927082   11632 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)
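
For context, --dry-run validates flags (here an intentionally undersized --memory 250MB) without creating or mutating the VM; the skip exists because on HyperV the command exits with status 1 before that validation surfaces (kubernetes/minikube#9785). A minimal sketch of the same check on a driver where dry-run behaves; the fast memory-validation failure is an assumption about non-HyperV drivers, not something shown in this log:

	$ minikube start -p functional-264400 --dry-run --memory 250MB --driver=docker
	# expected: quick non-zero exit complaining the requested memory is below minikube's minimum; no machine is created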

TestFunctional/parallel/InternationalLanguage (5.05s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-264400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-264400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0504633s)

-- stdout --
	* [functional-264400] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0722 00:22:02.843078    6976 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0722 00:22:02.914105    6976 out.go:291] Setting OutFile to fd 900 ...
	I0722 00:22:02.915143    6976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:22:02.915143    6976 out.go:304] Setting ErrFile to fd 704...
	I0722 00:22:02.915143    6976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 00:22:02.939092    6976 out.go:298] Setting JSON to false
	I0722 00:22:02.943079    6976 start.go:129] hostinfo: {"hostname":"minikube6","uptime":122930,"bootTime":1721484792,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0722 00:22:02.943079    6976 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 00:22:02.950147    6976 out.go:177] * [functional-264400] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0722 00:22:02.954103    6976 notify.go:220] Checking for updates...
	I0722 00:22:02.957119    6976 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0722 00:22:02.962098    6976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 00:22:02.965101    6976 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0722 00:22:02.968105    6976 out.go:177]   - MINIKUBE_LOCATION=19312
	I0722 00:22:02.970099    6976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 00:22:02.974101    6976 config.go:182] Loaded profile config "functional-264400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 00:22:02.975177    6976 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.05s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
